专利摘要:
Image processing apparatus, screen, image processing method, and screen data signal. An image processing apparatus comprises a receiver (201) for receiving an image signal comprising an encoded image. Another receiver (1701) receives a data signal from a screen (107), where the data signal comprises a data field comprising an indication of the dynamic range of the screen (107). The screen's dynamic range indication comprises at least one luminance specification for the screen. A dynamic range processor (203) is arranged to generate an output image by applying a dynamic range transform to the encoded image in response to the screen's dynamic range indication. An output (205) outputs an output image signal comprising the output image to the screen. The transform may further be performed in response to a target screen reference indicative of the dynamic range of the screen for which the encoded image is encoded. The invention can be used to generate an improved high dynamic range (HDR) image from, for example, a low dynamic range (LDR) image, or vice versa.
公开号:BR112014006978B1
申请号:R112014006978-6
申请日:2012-09-20
公开日:2021-08-31
发明作者:Charles Leonardus Cornelius Maria Knibbeler;Renatus Josephus Van Der Vleuten
申请人:Koninklijke Philips N.V.;
IPC主号:
专利说明:

FIELD OF THE INVENTION
[0001] The invention relates to dynamic range transforms for images, and in particular but not exclusively to image processing to generate High Dynamic Range images from Low Dynamic Range images or to generate Low Dynamic Range images from High Dynamic Range images.

BACKGROUND OF THE INVENTION
[0002] Digital coding of various source signals has become increasingly important in recent decades as digital signal representation and communication have increasingly replaced analog representation and communication. Research and development is ongoing into how to improve the obtainable quality of encoded images and video sequences while at the same time keeping the data rate at acceptable levels.
[0003] An important factor for perceived image quality is the dynamic range that can be reproduced when an image is displayed. Conventionally, the dynamic range of reproduced images has tended to be substantially reduced relative to normal vision. Indeed, luminance levels found in the real world span a dynamic range as large as 14 orders of magnitude, from a moonless night to looking directly at the sun. The instantaneous luminance dynamic range and the corresponding human visual system response can fall between 10,000:1 and 100,000:1 on sunny days or at night (bright reflections against dark shadow regions). Traditionally, the dynamic range of displays has been confined to about 2-3 orders of magnitude, and sensors have likewise had a limited range, e.g. <10,000:1 depending on the acceptable noise level. Consequently, it has traditionally been possible to store and transmit images in 8-bit gamma-encoded formats without introducing perceptually visible artifacts on traditional rendering devices. However, in an effort to record more accurate and lively images, new high dynamic range (HDR) image sensors that can record dynamic ranges of more than 6 orders of magnitude have been developed. In addition, most special effects, computer graphics enhancements, and other post-production work are already routinely performed at greater bit depths and with greater dynamic ranges.
[0004] In addition, the contrast and maximum luminance of prior art display systems continue to increase. Recently, new prototype screens have been introduced with a maximum luminance as high as 3000 Cd/m2 and contrast ratios of 5-6 orders of magnitude (for the screen itself; the viewing environment will of course also affect the rendered contrast ratio, which for daytime television viewing can still drop below 50:1). It is expected that future displays will provide even higher dynamic ranges and, specifically, higher luminances and contrast ratios. When traditionally encoded 8-bit signals are displayed on these screens, quantization noise and clipping artifacts may appear. Furthermore, traditional video formats offer insufficient headroom and accuracy to convey the rich information contained in new HDR images.
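To illustrate the quantization issue mentioned above, the following sketch (purely illustrative, not part of the patented method; the gamma value and luminance figures are assumptions) computes the luminance step between the two brightest 8-bit code values of a gamma-coded signal. The step grows in proportion to the screen's peak luminance, which is why banding that is invisible on a 500 nit screen can become visible on a 3000 nit screen:

```python
def step_size_nits(peak_nits, bits=8, gamma=2.4):
    """Luminance difference between the two brightest code values of a
    gamma-coded signal.  Larger steps risk visible banding."""
    max_code = 2 ** bits - 1
    second = peak_nits * ((max_code - 1) / max_code) ** gamma
    return peak_nits - second

# The step scales linearly with peak luminance: a 3000 nit screen shows
# a step six times larger than a 500 nit screen for the same 8-bit code.
step_ldr = step_size_nits(500.0)
step_hdr = step_size_nits(3000.0)
```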
[0005] As a result, there is a growing need for new approaches that allow a consumer to fully benefit from the capabilities of prior art (and future) sensors and display systems. Preferably, representations of this additional information are backwards compatible, so that legacy equipment can still receive ordinary video streams while new HDR-enabled devices can take full advantage of the additional information conveyed by the new format. Thus, it is desirable that the encoded video data not only represent the HDR images but also allow encoding of the corresponding traditional low dynamic range (LDR) images that can be displayed on conventional equipment.
[0006] To successfully introduce HDR systems and to fully exploit the promise of HDR, it is important that the approach taken provides backwards compatibility and at the same time allows optimization, or at least adaptation, for HDR screens. However, this inherently involves a conflict between optimization for HDR and optimization for traditional LDR.
[0007] For example, image content such as video clips will typically be processed in the studio (color grading and tone mapping) for an optimal appearance on a specific screen. Traditionally, this optimization has been performed for LDR screens. For example, when producing for a standard LDR screen, color grading experts will balance many aspects of picture quality to create the desired 'look' for the content. This can involve balancing regional and local contrasts, and sometimes even deliberately clipping pixels. For example, on a screen with relatively low peak brightness, flares or bright highlights are often severely clipped to convey a high-brightness impression to the viewer (the same holds for dark shadow details on screens with limited black level capability). This operation will typically be performed assuming a nominal LDR screen, and traditionally screens have deviated relatively little from the nominal LDR screen, as virtually all consumer screens are still LDR screens.
[0008] However, if the film were adapted for a target HDR screen, the result would be very different. In fact, color experts would perform an optimization resulting in a very different code mapping. For example, not only can highlight and shadow details be better preserved on HDR screens, but these can even be optimized to have different mid-tone gray distributions. Thus, an ideal HDR image is not achieved by simply scaling an LDR image by a value corresponding to the difference in white point luminances (the maximum attainable brightness).
[0009] Ideally, separate color gradings and tone mappings would be performed for each possible screen dynamic range. For example, one video sequence could be generated for a maximum white point luminance of 500 Cd/m2, one for 1000 Cd/m2, one for 1500 Cd/m2 etc., up to the brightest screens. A given screen could then simply select the video sequence corresponding to its brightness. However, such an approach is impractical, as it requires a large number of video sequences to be generated, thus increasing the resources needed to generate these different video sequences. Furthermore, the necessary storage and distribution capacity would increase substantially. Furthermore, the approach would limit the possible maximum brightness to discrete levels, thus providing sub-optimal performance for screens with maximum brightness levels between the levels at which the video sequences are provided. Furthermore, such an approach would not allow future screens with maximum light levels higher than that of the highest-light-level video stream to be targeted.
[0010] Of course, only a limited number of video sequences can be expected to be created on the content provision side, and automatic dynamic range conversions can be expected to be applied to these video sequences at later points in the distribution chain, to generate a video sequence suitable for the specific screen on which it is rendered. However, in these approaches the resulting image quality is highly dependent on the automatic dynamic range conversion.
[0011] Thus, an improved approach to supporting images of different dynamic ranges, and preferably to supporting screens of different dynamic ranges, would be advantageous.

SUMMARY OF THE INVENTION
[0012] Accordingly, the invention preferably seeks to mitigate, alleviate or eliminate one or more of the above-mentioned disadvantages, singly or in any combination.
[0013] According to an aspect of the invention there is provided an image processing apparatus comprising: a receiver for receiving an image signal, the image signal comprising at least a first encoded image and a first target screen reference, the first target screen reference being indicative of a dynamic range of a first target screen for which the first encoded image is encoded; a dynamic range processor arranged to generate an output image by applying a dynamic range transform to the first encoded image in response to the first target screen reference; and an output for outputting an output image signal comprising the output image.
[0014] The invention can allow a system to support images and/or screens of different dynamic ranges. In particular, the approach can allow for improved dynamic range transforms that can adapt to the specific characteristics of the image rendering. In many scenarios an enhanced dynamic range transform from LDR to HDR images, or from HDR to LDR images, can be achieved.
[0015] In some embodiments, the dynamic range transform increases a dynamic range of the output video signal with respect to the first encoded image. In some embodiments, the dynamic range transform reduces a dynamic range of the output video signal with respect to the first encoded image.
[0016] A dynamic range corresponds to a rendering luminance range, that is, a range from a minimum light output to a maximum light output for the rendered image. Thus, a dynamic range is not merely a ratio between a maximum value and a minimum value, or a quantization measure (such as a number of bits), but corresponds to an actual luminance range for a rendering of an image. Thus, a dynamic range can be a range of luminance values, for example measured in candela per square meter (cd/m2), also referred to as nits. A dynamic range is then the luminance range from the emitted light (brightness) corresponding to the lowest luminance value (generally assumed to be absolute black, i.e. no emitted light) to the emitted light (brightness) corresponding to the highest luminance value. The dynamic range can specifically be characterized by the highest emitted light value, also referred to as the white point, white point luminance, white luminance or maximum luminance. For LDR images and LDR screens, the white point is typically 500 nits or less.
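As a simple illustration of the above definition (an illustrative sketch only, with example values that are assumptions), a dynamic range can be characterized by its luminance endpoints and expressed as a contrast ratio or in orders of magnitude:

```python
import math

def contrast_ratio(white_point_nits, black_level_nits):
    """Ratio of the highest to the lowest emitted luminance."""
    return white_point_nits / black_level_nits

def orders_of_magnitude(white_point_nits, black_level_nits):
    """The same range expressed in orders of magnitude (log10)."""
    return math.log10(contrast_ratio(white_point_nits, black_level_nits))

# Example values for a typical LDR screen: 500 nit white point,
# 0.5 nit black level -> 1000:1, i.e. 3 orders of magnitude.
ldr_ratio = contrast_ratio(500.0, 0.5)
ldr_orders = orders_of_magnitude(500.0, 0.5)
```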
[0017] The output image signal can specifically be fed to a screen having a specific dynamic range, and the dynamic range transform can thus convert the encoded image from the dynamic range indicated by the target screen reference to the dynamic range of the screen on which the image is rendered.
[0018] The image can be an image of a moving image sequence, such as a frame or picture of a video sequence. As another example, the image could be a static background image or, for example, an overlay image such as graphics, etc.
[0019] The first encoded image may specifically be an LDR image and the output image may be an HDR image. Alternatively, the first encoded image may be an HDR image and the output image an LDR image.
[0020] According to an optional feature of the invention, the first target screen reference comprises a white point luminance of the first target screen.
[0021] This can provide advantageous operation in many embodiments. In particular, it can allow low complexity and/or low overhead while providing sufficient information to allow an improved dynamic range transform to be performed.
[0022] According to an optional feature of the invention, the first reference of the target screen comprises an indication of the Electro-Optical Transfer Function for the first target screen.
[0023] This can provide advantageous operation in many embodiments. In particular, it can allow low complexity and/or low overhead while providing sufficient information to allow an improved dynamic range transform to be performed. The approach can in particular allow the dynamic range transform to also adapt to specific characteristics of, for example, mid-range luminances. For example, it can allow the dynamic range transform to account for differences between the gamma of the target screen and that of the end-user screen.
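As an illustrative sketch of how an EOTF indication might be used (assuming a simple power-law EOTF; real screens may use other transfer functions, and the parameter values here are assumptions), the receiver could map code values to luminance and compensate for a gamma difference between the target screen and the end-user screen:

```python
def gamma_eotf(code_value, peak_nits=500.0, gamma=2.4, bit_depth=8):
    """Map an integer drive (code) value to emitted luminance in nits
    using a simple power-law EOTF (a hypothetical model)."""
    v = code_value / (2 ** bit_depth - 1)   # normalize to [0, 1]
    return peak_nits * (v ** gamma)

def compensate_gamma(code_value, target_gamma=2.4, screen_gamma=2.2, bit_depth=8):
    """Re-express a code value intended for a target screen with one
    gamma as a code value for an end-user screen with another gamma,
    so the same relative luminance is produced."""
    max_code = 2 ** bit_depth - 1
    v = code_value / max_code
    intended = v ** target_gamma                 # relative luminance on target screen
    return round((intended ** (1.0 / screen_gamma)) * max_code)
```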
[0024] According to an optional feature of the invention, the first target screen reference comprises a tone mapping indication representing a tone mapping used to generate the first encoded image for the first target screen.
[0025] This can allow an enhanced dynamic range transform to be performed in many scenarios, and can specifically allow the dynamic range transform to compensate for the specific characteristics of the tone mapping performed on the content creation side.
[0026] In some scenarios, the image processing device can then consider both the characteristics of the screen for which the encoded image has been optimized and the characteristics of the specific tone mapping. This can, for example, allow subjective and, for example, artistic tone mapping decisions to be taken into account when transforming an image from one dynamic range to another.
[0027] According to an optional feature of the invention, the image signal further comprises a data field comprising dynamic range transform control data; and the dynamic range processor is further arranged to perform the dynamic range transform in response to the dynamic range transform control data.
[0028] This can provide improved performance and/or functionality in many systems. In particular, it can allow local and targeted adaptation to screens of specific dynamic ranges while still allowing the content provider side to retain some control over the resulting images.
[0029] The dynamic range transform control data may include data specifying characteristics of the dynamic range transform that should and/or may be applied, and/or may specify recommended characteristics of the dynamic range transform.
[0030] According to an optional feature of the invention, the dynamic range transform control data comprises different parameters of the dynamic range transform for different maximum screen luminance levels.
[0031] This can provide improved control and/or adaptation in many embodiments. In particular, it may allow the image processing device to select and apply control data appropriate to the specific dynamic range of the output image being generated.
[0032] According to an optional feature of the invention, the dynamic range transform control data comprises different tone mapping parameters for different maximum screen luminance levels, and the dynamic range processor is arranged to determine tone mapping parameters for the dynamic range transform in response to the different tone mapping parameters and a maximum luminance for the output image signal.
[0033] This can provide improved control and/or adaptation in many embodiments. In particular, it may allow the image processing device to select and apply the control data appropriate for the specific dynamic range for which the output image is generated. The tone mapping parameters can specifically provide parameters that must, may, or are recommended to be used by the dynamic range transform.
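A hypothetical sketch of such a selection (the parameter structure and values are assumptions, not defined by the patent): given tone mapping parameters specified for a few discrete maximum luminance levels, a receiver could interpolate a parameter value for its actual output maximum luminance:

```python
def select_parameters(control_data, output_max_nits):
    """control_data: dict mapping screen maximum luminance (nits) to a
    tone mapping parameter (here a single hypothetical 'gain' value).
    Linearly interpolates between the two nearest specified levels and
    clamps outside the specified range."""
    levels = sorted(control_data)
    if output_max_nits <= levels[0]:
        return control_data[levels[0]]
    if output_max_nits >= levels[-1]:
        return control_data[levels[-1]]
    for lo, hi in zip(levels, levels[1:]):
        if lo <= output_max_nits <= hi:
            t = (output_max_nits - lo) / (hi - lo)
            return control_data[lo] + t * (control_data[hi] - control_data[lo])

# Parameters provided for 500, 1000 and 2000 nit screens; a 750 nit
# output falls halfway between the first two levels.
params = {500: 1.0, 1000: 1.5, 2000: 2.0}
gain = select_parameters(params, 750)   # interpolated value
```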
[0034] According to an optional feature of the invention, the dynamic range transform control data comprises data defining a set of transform parameters that must be applied by the dynamic range transform.
[0035] This can allow the content provider side to retain control over the images rendered on the screens supported by the image processing device. This can ensure consistency between different rendering situations. The approach may, for example, allow a content provider to ensure that the artistic impression of the image remains unchanged when rendered on different screens.
[0036] According to an optional feature of the invention, the dynamic range transform control data comprises data defining limits for the transform parameters to be applied by the dynamic range transform.
[0037] This can provide improved operation and an improved user experience in many embodiments. In particular, it can in many scenarios allow an improved compromise between a content provider's desire to retain control over the rendering of its content and an end user's ability to apply personal preferences.
[0038] According to an optional feature of the invention, the dynamic range transform control data comprises different transform control data for different image categories.
[0039] This can provide improved image transforms in many scenarios. In particular, it can allow the dynamic range transform to be optimized for the individual characteristics of different images. For example, different dynamic range transforms can be applied to images corresponding to the main picture, to graphics, to a background, etc.
[0040] According to an optional feature of the invention, a maximum luminance of the dynamic range of the first target screen is not less than 1000 nits.
[0041] The image to be transformed can thus be an HDR image. The dynamic range transform can transform such an HDR image into another HDR image (associated with a screen having a dynamic range of no less than 1000 nits) having a different dynamic range. Thus, improved image quality can be achieved by converting an HDR image for one dynamic range into another HDR image for another dynamic range (which may have a higher or lower white point luminance).
[0042] According to an optional feature of the invention, the image signal comprises a second encoded image and a second target screen reference, the second target screen reference being indicative of a dynamic range of a second target screen for which the second encoded image is encoded, the dynamic range of the second target screen being different from the dynamic range of the first target screen; and the dynamic range processor is arranged to apply the dynamic range transform to the second encoded image in response to the second target screen reference.
[0043] This can allow for improved output quality in many scenarios. In particular, different transforms can be applied to the first encoded image and to the second encoded image depending on the differences between the associated target screens (and typically on how each of these relates to the desired dynamic range of the output image).
[0044] According to an optional feature of the invention, the dynamic range image processor is arranged to generate the output image by combining the first encoded image and the second encoded image.
[0045] This can provide improved image quality in many embodiments and scenarios. In some scenarios, the combination can be a selection combination, where the combination is performed by simply selecting one of the images.
[0046] According to an optional feature of the invention, the image processing apparatus further comprises: a receiver for receiving a data signal from a screen, the data signal comprising a data field comprising an indication of the dynamic range of the screen, the dynamic range indication of the screen comprising at least one luminance specification; and the dynamic range processor is arranged to apply the dynamic range transform to the first encoded image in response to the screen's dynamic range indication.
[0047] This can allow for improved rendering of the image in many embodiments.
[0048] According to an optional feature of the invention, the dynamic range processor is arranged to select between generating the output image as the first encoded image and generating the output image as a transformed image of the first encoded image in response to the first target screen reference.
[0049] This can allow improved rendering of the image in many embodiments and/or can reduce the computational burden. For example, if the end user's screen has a dynamic range that is very close to that for which the encoded image was generated, improved quality of the rendered image will typically be obtained if the received image is used directly. However, if the dynamic ranges are sufficiently different, improved quality is obtained by processing the image to adapt it to the different dynamic range. In some embodiments, the dynamic range transform can simply be switched between a null operation (using the first encoded image directly) and a fixed, predetermined dynamic range transform applied when the target screen reference is sufficiently different from the end user's screen.
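The selection described above could, as a purely illustrative sketch (the tolerance value is an arbitrary assumption), be based on how close the end-user screen's white point is to the target screen reference:

```python
def choose_processing(target_ref_nits, screen_nits, tolerance=0.2):
    """Return 'passthrough' when the end-user screen's white point is
    within a relative tolerance of the target screen reference, else
    'transform'.  The 20% tolerance is illustrative only."""
    if abs(screen_nits - target_ref_nits) <= tolerance * target_ref_nits:
        return "passthrough"
    return "transform"

choose_processing(500, 550)    # screens are close: use image directly
choose_processing(500, 2000)   # LDR-graded image on an HDR screen: transform
```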
[0050] According to an optional feature of the invention, the dynamic range transform comprises a gamma transform.
[0051] This can allow an enhanced output image to be generated in many embodiments and scenarios. In particular, it can allow improved perceived color rendering and can, for example, compensate for changes in color perception resulting from changes in the brightness of image areas. In some embodiments the dynamic range transform may consist of a gamma transform.
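As an illustrative sketch of a gamma-based dynamic range transform (the parameter values are assumptions; an actual embodiment would choose them from the target screen reference and screen characteristics), a power-law adjustment of normalized luminance boosts highlights relatively more than mid-tones when the exponent is above one, unlike a plain linear scaling by the white point ratio:

```python
def gamma_transform(lum_norm, gamma):
    """Power-law adjustment of a normalized [0, 1] luminance value."""
    return lum_norm ** gamma

def ldr_to_hdr(lum_norm, hdr_white=2000.0, gamma=2.0):
    """Hypothetical LDR-to-HDR sketch: with gamma > 1 the relative
    boost grows with luminance, so highlights approach the HDR white
    point while dark regions are not scaled up proportionally."""
    return hdr_white * gamma_transform(lum_norm, gamma)
```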
[0052] According to an optional feature of the invention, the image processing apparatus further comprises a control data transmitter for transmitting the dynamic range control data to an image signal source.
[0053] This can allow the source to adapt the image signal in response to the dynamic range control data. The dynamic range control data may specifically comprise an indication of a preferred dynamic range for the image, and/or an indication of a dynamic range (e.g. white point luminance and optionally EOTF or gamma function) for the end-user screen.
[0054] According to an aspect of the invention there is provided an image signal source apparatus comprising: a receiver for receiving an encoded image; a generator for generating an image signal comprising the encoded image and a target screen reference indicative of a dynamic range of a target screen for which the encoded image is encoded; and a transmitter for transmitting the image signal.
[0055] According to one aspect of the invention there is provided an image processing method comprising:
[0056] receiving an image signal, the image signal comprising at least a first encoded image and a first target screen reference, the first target screen reference being indicative of a dynamic range of a first target screen for which the first encoded image is encoded;
[0057] generating an output image by applying a dynamic range transform to the first encoded image in response to the first target screen reference; and
[0058] issuing an output image signal comprising the output image.
[0059] According to an aspect of the invention there is provided a method for transmitting an image signal, the method comprising: receiving an encoded image; generating an image signal comprising the encoded image and a target screen reference indicative of a dynamic range of a target screen for which the encoded image is encoded; and transmitting the image signal.
[0060] According to an aspect of the invention there is provided an image signal comprising at least a first encoded image and a first target screen reference, the first target screen reference being indicative of a dynamic range of a first target screen for which the first encoded image is encoded.
[0061] These and other aspects, features and advantages of the invention will be apparent from and elucidated with reference to the embodiment(s) described below.

BRIEF DESCRIPTION OF THE DRAWINGS
[0062] The embodiments of the invention will be described, by way of example only, with reference to the drawings, in which:
[0063] Figure 1 is an example illustration of elements of an image interpretation system according to some embodiments of the invention;
[0064] Figure 2 is an illustration of an example of elements of an image processing apparatus;
[0065] Figure 3 illustrates an example of a mapping for an image processing apparatus;
[0066] Figure 4 illustrates an example of an Electro-Optical Transfer Function (EOTF) for a screen;
[0067] Figure 5 illustrates an example of the model for the presentation planes in HDMV-2D mode of the Blu-rayTM standard;
[0068] Figure 6 illustrates an example of dynamic range processing for HDR and LDR images;
[0069] Figure 7 illustrates an example of a mapping for an image processing apparatus;
[0070] Figures 8-10 illustrate examples of images with different dynamic range transforms when presented on the same screen;
[0071] Figure 11 illustrates an example of a relationship between luminance values and possible mappings for an image processing apparatus;
[0072] Figure 12 illustrates an example of a mapping for an image processing apparatus;
[0073] Figure 13 illustrates an example of a mapping for an image processing apparatus;
[0074] Figure 14 illustrates the structure of a graphics stream according to the Blu-rayTM standard;
[0075] Figure 15 illustrates an example of dynamic range processing for an image and an associated overlay graphics image;
[0076] Figure 16 illustrates an example of dynamic range processing for an image and graphics;
[0077] Figure 17 is an illustration of an example of elements of an image processing apparatus;
[0078] Figure 18 illustrates an example of a mapping for an image processing apparatus;
[0079] Figure 19 is an illustration of an example of elements of an image processing apparatus;
[0080] Figure 20 illustrates an example of a mapping for an image processing apparatus;
[0081] Figure 21 is an illustration of an example of elements of a screen according to some embodiments of the invention;
[0082] Figure 22 is an illustration of an example of elements of an image processing apparatus; and
[0083] Figure 23 schematically illustrates the generation of an 8-bit image encoding an HDR image by means of an encoding apparatus.

DETAILED DESCRIPTION OF SOME EMBODIMENTS OF THE INVENTION
[0084] Figure 1 illustrates an example of an audiovisual distribution path. In the example, a content provider apparatus 101 generates an audiovisual content signal for an audiovisual content item, such as a movie, television program, etc. The content provider apparatus 101 can specifically encode the audiovisual content in accordance with a suitable encoding format and color representation. In particular, the content provider apparatus 101 can encode the images of a video sequence of the audiovisual content item in accordance with a suitable representation such as, for example, YCrCb. The content provider apparatus 101 can be considered to represent a production and distribution house that creates and broadcasts the content.
[0085] The audiovisual content signal is then distributed to an image processing device 103 via a distribution path 105. The image processing device 103 may, for example, be a set-top box residing with a specific consumer of the content item, a Personal Video Recorder, a Blu-rayTM player, a network streaming device (e.g. Internet), a satellite or terrestrial television receiver, etc.
[0086] The audiovisual content is encoded and distributed from the content provider apparatus 101 through a medium, which may, for example, consist of a packaged medium or a communication medium. It then reaches a source device in the form of the image processing device 103, which comprises functionality to decode and play back the content.
[0087] It will be appreciated that the distribution path 105 may be any distribution path, through any medium and using any suitable communication standard. Furthermore, the distribution path need not be real-time, but may include permanent or temporary storage. For example, the distribution path may include the Internet, satellite, cable or terrestrial transmission, a mobile or fixed communication network etc., or storage on physically distributed media such as a DVD, a Blu-ray DiscTM or a memory card.
[0088] The image processing device 103 is coupled to a screen 107 via a communication path 109. The image processing device 103 generates a display signal representing the audiovisual content item. Thus, the source device transmits the decoded content to a sink device, which can be a TV or another device that converts the digital signals into a physical representation.
[0089] The image processing device 103 can, for example, perform image enhancement or signal processing algorithms on the data and can specifically decode and re-encode the (processed) audiovisual signal. The re-encoding may specifically use a different encoding format or representation than that of the received signal.
[0090] The system of figure 1 is in some embodiments arranged to provide High Dynamic Range (HDR) video information to the screen 107 and in other embodiments or scenarios is arranged to provide a Low Dynamic Range (LDR) image to the screen 107. Still, to provide, for example, improved backwards compatibility, it can in some scenarios provide both an LDR image and an HDR image depending on the screen on which it is displayed. Specifically, the system can communicate/distribute image signals referring to both LDR and HDR images.
[0091] Conventional screens typically use an LDR representation. Typically, these LDR representations are provided by an 8-bit representation with three components referenced to specific primaries. For example, an RGB color representation can be provided by three 8-bit samples referenced to Red, Green and Blue primaries respectively. Another representation uses one luma component and two chroma components (such as YCrCb). These LDR representations correspond to a given brightness or luminance range.
[0092] HDR specifically allows significantly brighter images (or areas of the image) to be presented correctly on HDR screens. In fact, an HDR image displayed on an HDR screen can provide a substantially brighter white than can be provided by the corresponding LDR image displayed on an LDR screen. In fact, an HDR screen can typically allow a white at least four times brighter than an LDR screen. Brightness can specifically be measured relative to the darkest black that can be represented, or relative to a given gray or black level.
[0093] The LDR image can specifically correspond to specific screen parameters, such as a fixed bit resolution related to a specific set of primaries and/or a specific white point. For example, 8 bits can be provided for a given set of RGB primaries and, for example, a white point of 500 Cd/m2. The HDR image is an image that includes data that should be rendered beyond these restrictions. In particular, a brightness may be more than four times brighter than the white point (e.g. 2000 Cd/m2) or more.
[0094] High dynamic range pixel values have a luminance contrast range (brightest luminance in the set of pixels divided by darkest luminance) that is (much) larger than a range that can be faithfully displayed on the screens standardized in the NTSC and MPEG-2 era (with their typical RGB primaries, and a D65 white with maximum drive level [255, 255, 255] corresponding to a reference brightness of e.g. 500 nits or below). Typically, for such a reference screen, 8 bits are sufficient to display all gray values between approximately 500 nits and approximately 0.5 nits (i.e. within a 1000:1 contrast range or below) in visually small steps, whereas HDR images are encoded with a larger bit word, e.g. 10 bits (which are also captured by a camera with a greater well depth and DAC, e.g. 14 bits). In particular, HDR images typically contain many pixel values (of bright image objects) above the scene white. In particular, several pixels are brighter than twice the scene white. This scene white can typically be equated with the white of the NTSC/MPEG-2 reference screen.
[0095] The number of bits X used for HDR images is typically greater than or equal to the number of bits Y used for LDR images (X may typically be, for example, 10, 12 or 14 bits (per color channel if several channels are used), and Y may, for example, be 8 or 10 bits). A transformation/mapping may be required to fit pixels into a smaller range, e.g. a compressive scaling. Typically, a non-linear transformation may be involved; for example, a logarithmic encoding may encode (as lumas) a far larger luminance range in an X-bit word than a linear encoding, although the luminance difference steps from one value to the next are then not equidistant, but neither is this required by the human visual system.
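The contrast between linear and logarithmic luma encodings can be sketched as follows. This is an illustrative sketch only (not part of the patent embodiments); the 10-bit code word and the 0.005-5000 nit luminance range are assumptions chosen for the illustration:

```python
import math

def encode_linear(luminance, max_lum=5000.0, bits=10):
    """Quantize a luminance (nits) linearly into a bits-wide code word."""
    levels = (1 << bits) - 1
    return round(luminance / max_lum * levels)

def encode_log(luminance, max_lum=5000.0, min_lum=0.005, bits=10):
    """Quantize logarithmically: equal code steps correspond to equal
    *ratios* of luminance, roughly matching human visual sensitivity."""
    levels = (1 << bits) - 1
    ratio = math.log(luminance / min_lum) / math.log(max_lum / min_lum)
    return round(ratio * levels)

# Near black, the linear code spends almost no code values: 1 nit maps
# to code 0, while the logarithmic code has already used hundreds of
# code values below 1 nit, giving visually small steps in dark areas.
```

With these assumed parameters, `encode_linear(1.0)` yields code 0 while `encode_log(1.0)` yields a code in the hundreds, illustrating why a logarithmic (or similar non-linear) encoding covers a far larger luminance range in the same word length.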
[0096] It should be noted that the difference between LDR images and HDR images is not merely that a greater number of bits is used for HDR images than for LDR images. Rather, HDR images cover a greater luminance range than LDR images and typically have a higher maximum luminance value, i.e., a higher white point. In fact, whereas LDR images have a maximum luminance (white) point corresponding to no more than 500 nits, HDR images have a maximum luminance (white) point corresponding to more than 500 nits, and often to 1000 nits, 2000 nits or even 4000 nits or higher. Thus, an HDR image does not merely use more bits corresponding to a higher granularity or improved quantization, but corresponds to a larger true luminance range. Thus, the brightest possible pixel value generally corresponds to an emitted luminance/light that is higher for an HDR image than for an LDR image. In fact, HDR and LDR images can use the same number of bits, but with the HDR image values being referenced to a larger luminance dynamic range/brighter maximum luminance than the LDR image values (and thus with the HDR images being represented with a coarser quantization on a luminance scale).
[0097] Ideally, the content provided by the content provider apparatus 101 will be captured and encoded with reference to a luminance range that corresponds to the luminance range of the screen 107. However, in practical systems the content may be rendered on many different screens with many different characteristics, and/or may be encoded according to standards that are based on luminance ranges that differ from the luminance range of the specific screen 107. Also, the content may not originally have been captured by a capture device or approach that exactly matches the luminance range of the screen.
[0098] Therefore, HDR support in a content system typically requires some transformation or conversion between different luminance ranges. For example, if an LDR image is received and is to be presented on an HDR screen, an LDR to HDR conversion must be performed. If an HDR image is received and is to be presented on an LDR screen, an HDR to LDR conversion must be performed. These conversions are typically more complex than a simple scaling of the luminance ranges, as such a scaling would result in an image perceived as having an abnormal appearance. More complex transformations are typically used, and these transformations are often referred to by the term tone mapping.
[0099] In principle, these luminance transformations could be performed in three different places in the content distribution system.
[0100] One option is to perform it in the content provider apparatus 101. Typically, this can allow the result of the same luminance transform operation to be distributed to multiple screens, thus allowing a single transform to be used by many users. This can allow and justify complex, manual and resource-intensive tone mapping to be carried out, for example, by experts in tone mapping. In fact, this can provide a subjectively optimized image for a given luminance range, often referred to as artistic tone mapping. However, such an approach is resource intensive and is not feasible to apply for many different screens. Furthermore, a separate image stream is required for each supported luminance range, resulting in a very high communication resource requirement which is impractical for many systems.
[0101] Another option is to perform the luminance transform in the image processing device 103. However, as the typical user is not expert in luminance transforms, and since the effort required makes it impractical to perform manual adaptation (especially for moving images such as video clips, movies etc.), the transformation should preferably be automatic. However, such automatic transforms cannot conventionally provide ideal images. In particular, the ideal transform may depend on the specific type of content and on the intended characteristics of the image (for example, different transforms may be appropriate for a scene intended to be dark and menacing and a scene that is only intended to be dark to indicate a night scene), and a different transformation again may be applied to cartoons or news content. Furthermore, the originator of the content may be concerned about the potential impact of these automatic transforms and may be reluctant to lose control over how the content is presented in different scenarios. Also, the optimal transform will typically depend on the exact characteristics of the screen 107, and a transform based on an assumed, nominal or default screen will typically result in sub-ideal transforms.
[0102] The transform can possibly also be performed on screen 107.
[0103] In the system of Figure 1, the image processing device 103 comprises functionality to perform a dynamic range luminance transform on an image (or set of images, such as a video sequence) received from the content provider apparatus 101 to increase its dynamic range. In particular, the image processing device 103 receives an image from the content provider apparatus 101 and then processes the image to generate a higher dynamic range image. Specifically, the received image can be an LDR image that is converted to an HDR image by applying the dynamic range luminance transform to increase the dynamic range. The transformed image can then be output to the screen 107, which is an HDR screen, thus resulting in the originally received LDR image being converted to a rendered HDR image. A dynamic range transform can map luminance values for (at least part of) an input image associated with one dynamic range to luminance values for (at least part of) an output image associated with a different dynamic range.
[0104] In another scenario, the image processing device 103 can receive an image from the content provider apparatus 101 and then process the image to generate a lower dynamic range image. Specifically, the received image can be an HDR image that is converted to an LDR image by applying the dynamic range luminance transform to reduce the dynamic range. The transformed image can then be output to the screen 107, which is an LDR screen, thus resulting in the originally received HDR image being converted to a rendered LDR image.
[0105] In the system of Figure 1, the dynamic range transform is adapted depending on the information received from the content provider apparatus 101 and/or the screen 107. Thus, in the system, the dynamic range transform is not merely a local operation performed in the image processing device 103, but may also be dependent on the characteristics, properties or information of the content provider apparatus 101 and/or the screen 107.
[0106] First the system of Figure 1 will be described with reference to a situation where the dynamic range transform is based on information provided to the image processing device 103 of the content provider apparatus 101.
[0107] Figure 2 illustrates an example of elements of the image processing device 103 of Figure 1.
[0108] The image processing device 103 comprises a receiver 201 which receives an image signal from the content provider apparatus 101. The image signal comprises one or more encoded images. In many scenarios the image signal can be a video signal comprising an encoded video sequence, i.e. a sequence of images. It will be appreciated that any suitable encoding of the image(s) may be used, including, for example, JPEG image encoding, MPEG video encoding, etc. The encoded image is represented by pixel values which, for each pixel in the image, represent the corresponding emitted light for the pixel (or for individual color channel subpixels). Pixel values can be provided according to any suitable color representation, such as RGB, YUV etc.
[0109] The image signal further comprises the target screen reference, which is indicative of a dynamic range of a target screen for which the first encoded image is encoded. Thus, the target screen reference provides a reference for the encoded image that reflects the dynamic range for which the received image was constructed. The target screen reference may indicate the luminances for which the tone mapping in the content provider apparatus 101 was designed, and specifically optimized.
[0110] The content provider apparatus 101 is thus arranged to generate an image signal that not only includes the encoded image itself, but also a target screen reference that represents the dynamic range of the screen for which the encoded signal was generated. The content provider apparatus 101 can specifically receive the encoded image from an internal or external source. For example, the image can be provided as the result of a manual color grading that optimizes the encoded image for a specific screen. In addition, the content provider apparatus 101 can obtain the specific screen information that was used for the optimization, for example through screen information that was automatically communicated to the content provider apparatus 101 (for example, the content provider apparatus 101 may further include the functionality needed to support manual tone mapping and may be connected to the target/reference screen used for this tone mapping). As another example, the encoded tone-mapped image can be received on a medium on which the associated screen properties are also stored. As yet another example, the content provider apparatus 101 may receive the information on a characteristic of the target screen by manual user input.
[0111] The content provider apparatus 101 can, in response to this information, generate an image signal comprising both the encoded image and a target screen reference that indicates the dynamic range of the target screen that was used for the tone mapping. For example, a data value corresponding to an identification of a white point luminance, and optionally an Electro-Optical Transfer Function corresponding to that of the target screen, can be included in the image signal by the content provider apparatus 101.
[0112] The image processing device 103 further comprises a dynamic range processor 203 that applies the dynamic range transform to the received encoded image to generate an output image with a higher dynamic range, i.e., corresponding to a higher range of output luminances when the image is rendered. Specifically, the encoded input image can be an image that is encoded for an LDR screen with a maximum luminance white point of 500 nits, and this can be transformed into an output HDR image with a maximum luminance white point of, for example, 1000 or 2000 nits. Typically, the dynamic range transform may also increase the number of bits used to represent each value, but it will be noted that this is not essential and that in some embodiments the same number of bits (or even fewer bits) can be used for the output image than for the input image. As another example, the input encoded image can be an image that is encoded for an HDR screen with a maximum white point luminance of 2000 nits, and this can be turned into an output LDR image with a maximum white point luminance of, for example, 500 nits. This dynamic range reduction transform may further include a reduction in the number of bits used for pixel values.
[0113] The dynamic range transform is performed in response to the target screen reference and can thus be adapted to consider not only the desired output luminance range, but also the luminance range for which the received image was encoded. For example, the system can adapt the dynamic range transform so that a transform generating an output image for a 1000 nit screen will be different depending on whether the input image was generated for a 300 nit or a 500 nit screen. This can result in a substantially improved output image.
[0114] In fact, in some embodiments the input image itself may be an HDR image, such as a 1000 nit image. The ideal transformation of such an image into respectively a 2000 nit image and a 5000 nit image will typically be different, and the provision of a target screen reference may allow the image processing device 103 to optimize the dynamic range transform for the specific situation, thus providing a substantially improved image for the specific characteristics of the screen. In fact, if the screen is a 500 nit screen, the dynamic range transform must perform dynamic range compression rather than expansion.
[0115] The approaches can be particularly advantageous in non-homogeneous content distribution systems, such as, for example, is very much envisaged for future television systems. In fact, the (maximum) brightness of HDR LCD/LED TVs is currently rapidly increasing, and in the near future screens with a wide range of (maximum) brightness are expected to coexist in the market. Brighter pictures look better on the TV screen, and a brighter TV sells better in the store. On the other hand, "reduced capability" screens in notebooks, tablets and smartphones are also becoming very popular and are also used for the rendering of content, for example, TV content.
[0116] Since the screen brightness (and typically the electro-optical transfer function that specifies how a screen converts input pixel drive (color) values into light values that then provide a particular psychovisual impression to the viewer) is no longer known on the content generation side (and is also generally different from that of the reference monitor on which the content was targeted/graded), it becomes challenging to provide the best/optimal picture quality on the screen. Furthermore, while some variation in screen brightness may have existed in the past, this variation was relatively minor and the assumption of a fixed known brightness did not introduce significant degradations (and could generally be compensated for manually by a user, for example by setting the brightness and/or contrast of a screen).
[0117] However, due to the substantial increase in the variety of screens (smartphones, tablets, laptops, PC monitors, CRT screens, traditional LCD TV screens and bright HDR screens), the characteristics (especially the brightness and contrast) of the screens used for rendering exhibit enormous variation. For example, the maximum luminance contrast of state of the art high-end display systems is continually increasing, and new prototype displays have been developed with a maximum luminance as high as 5000 cd/m2 and contrast ratios of 5-6 orders of magnitude. On the other hand, screens being used, for example, in smartphones and tablets are becoming more and more popular, but have relatively low performance characteristics.
[0118] As mentioned above, content such as video for movies, etc., is processed at content creation to provide the desired rendered images. For example, when a movie is released for general distribution (such as on DVD or Blu-rayTM), the producers/studio typically adapt and grade the images for optimal appearance on a specific screen. This process is commonly referred to as color grading and tone mapping. Tone mapping can be thought of as a non-linear mapping of the luma value of an input pixel to the luma value of an output pixel. Tone mapping is performed to match the video to screen characteristics, viewing conditions, and subjective preferences. In the case of local tone mapping, the processing varies depending on the pixel position within an image. In the case of global tone mapping, the same processing is applied to all pixels.
[0119] For example, when converting content to be suitable for general consumer distribution, tone mapping is usually performed to provide the desired output on a standard LDR screen. This can be done manually by color grading experts who balance many aspects of picture quality to create the desired 'look' for the film. This can involve balancing local and global contrasts, sometimes even clipping pixels. Thus, tone mapping at this stage is typically not merely a simple automated conversion, but a manual, subjective and generally artistic conversion.
[0120] If the content had been graded for an HDR target screen rather than for an LDR target screen, the tone mapping result would typically be very different. Thus, by merely rendering video content encoded for an LDR screen on an HDR screen, the resulting images will differ substantially from the ideal image. Similarly, if an HDR optimized image is merely rendered on an LDR screen, a significant perceived image quality reduction can occur.
[0121] This issue is, in the system of Figure 1, addressed by the dynamic range transform being performed in the image processing device 103, but based on information received from the content provider apparatus 101 and preferably the screen 107. Thus, the dynamic range transform (specifically a tone mapping algorithm) can be adapted to take into account the characteristics of the tone mapping that was performed in the content provider apparatus 101 and the luminance range of the specific screen 107. Specifically, the tone mapping performed in the image processing device 103 may be dependent on the target screen for which tone mapping was performed on the content generation side.
[0122] The content provider apparatus 101 provides the target screen reference to the image processing device 103 (either separately from or integrated with the encoded image, i.e. the image signal can be made up of two separate data communications). The target screen reference may specifically include or be a white point luminance of the target screen.
[0123] For example, for a system with relatively low complexity, the content provider apparatus 101 can simply transmit an indication of the white point luminance of the target screen for which each encoded image (video) has been encoded. For example, data can be communicated indicating the number of nits available on the target screen. The dynamic range transform can then be adapted based on the number of nits. For example, if the image processing device 103 is performing a dynamic range transform to generate an output image for a 2000 nit screen, knowing whether the input image was tone-mapped for a 500 nit screen or a 1000 nit screen can be used to optimize the dynamic range transform performed in the image processing device 103. In both scenarios, the dynamic range transform can apply a non-linear transform, but this non-linear transform can have different characteristics for the two scenarios, namely, dependent on the white point of the target screen used for the tone mapping on the content provision side.
[0124] For example, the following mapping between the received tone-mapped LDR image pixels for a 500 nit target screen and the output HDR image pixels for a 2000 nit end-user screen can be performed:
0-200 nits → 0-200 nits
200-300 nits → 200-600 nits
300-400 nits → 600-1000 nits
400-500 nits → 1000-2000 nits
However, for a 1000 nit target screen, a different mapping can be performed:

[0125] Thus, in terms of the relative values (percentage of full-scale mapping), the two different mappings can be as shown in Figure 3, where the relationship between the white level percentage for the input image on the x-axis and the white level percentage for the output image on the y-axis is shown for respectively a 500 nit target screen (curve 301) and a 1000 nit target screen. In the example, two very different non-linear tone mappings are applied for the same end-user screen depending on which reference target screen was used/assumed on the content provision side.
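The 500-nit-to-2000-nit mapping table above can be realized as a simple piecewise-linear luminance mapping. The sketch below is purely illustrative (not part of the patent embodiments); the breakpoints are taken directly from the table:

```python
# Breakpoints from the mapping table:
# (input nits on a 500 nit grade, output nits for a 2000 nit screen)
BREAKPOINTS_500_TO_2000 = [(0, 0), (200, 200), (300, 600),
                           (400, 1000), (500, 2000)]

def map_luminance(lum_in, breakpoints=BREAKPOINTS_500_TO_2000):
    """Piecewise-linear luminance mapping between two display gradings.

    Dark values (below 200 nits) pass through unchanged, while the
    brightest input band (400-500 nits) is stretched to 1000-2000 nits.
    """
    for (x0, y0), (x1, y1) in zip(breakpoints, breakpoints[1:]):
        if lum_in <= x1:
            return y0 + (lum_in - x0) * (y1 - y0) / (x1 - x0)
    return breakpoints[-1][1]  # clip above the last breakpoint
```

The inverse mapping (e.g. 2000 nits back to 500 nits) follows by swapping the coordinates within each breakpoint pair; a different breakpoint table would be used for a 1000 nit target screen.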
[0126] It will be observed that the same mappings can be used for mapping a 2000 nit optimized image into a 500 or 1000 nit optimized image by swapping the axes (corresponding to applying an inverse of the mapping described above). It will also be noted that the mapping to, for example, a 500 nit optimized image can be adapted depending on whether the input image is a 1000, 2000 or 4000 nit optimized image.
[0127] In some embodiments, the target screen reference may additionally or alternatively comprise an indication of the Electro-Optical Transfer Function of the target screen. For example, a gamma indication for the target screen can be included.
[0128] The Electro-Optical Transfer Function (EOTF) of a screen describes the relationship between the input (drive) luma value (Y') and the output luminance (Y) of the screen. This conversion function depends on many screen characteristics. Also, user adjustments such as brightness and contrast can have a big influence on this function. Figure 4 illustrates a typical example of an EOTF for an 8-bit input value (256 levels).
[0129] The communication of an EOTF of the target screen can provide an advantageous characterization of the target or reference screen used to generate the encoded image or video. This characterization can then be used in the image processing device 103 to adapt the dynamic range transform to the differences between the characteristics of the target screen and those of the end-user screen. For example, the dynamic range transform can include a compensation that inverts the ratio between the EOTF of the target/reference screen and that of the end-user screen.
[0130] It will be noted that there are many ways to characterize an EOTF. One possibility is to provide a set of EOTF sample values. The image processing device 103 can then interpolate between the sample points, for example using simple linear interpolation. Another possibility is to provide a specific model of the screen's grayscale/contrast behavior over at least a portion of the screen's range. As another example, the content provider apparatus 101 can communicate a specific mathematical function that characterizes the EOTF. In some scenarios, a set of target screens can be predefined, with the associated model/function parameters being stored locally in the image processing device 103. In this case, the content provider apparatus 101 need only communicate the identification code of the target screen to the image processing device 103.
[0131] As yet another example, an underlying mathematical function can be predetermined and the indication of the target screen can comprise parameters to adapt the predefined function to describe the EOTF of the specific target screen. For example, the EOTF can be characterized by a gamma function as used for conventional screens, and the target screen indication can provide a specific gamma for the target screen.
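The two ways of characterizing an EOTF discussed above (a parametric gamma function, and a set of sample values with linear interpolation between them) can be sketched as follows. This is an illustrative sketch; the particular gamma value, white point and 8-bit drive range are assumptions for the example, not values from the specification:

```python
def eotf_gamma(drive, gamma=2.4, white_nits=500.0, levels=255):
    """Parametric EOTF: drive code (0..levels) -> output luminance (nits).
    Characterized by just two values: gamma and white point."""
    return white_nits * (drive / levels) ** gamma

def eotf_from_samples(drive, samples):
    """EOTF communicated as (drive, luminance) sample pairs, with simple
    linear interpolation between the communicated sample points."""
    samples = sorted(samples)
    for (d0, l0), (d1, l1) in zip(samples, samples[1:]):
        if drive <= d1:
            return l0 + (drive - d0) * (l1 - l0) / (d1 - d0)
    return samples[-1][1]  # clip above the last sample
```

Either representation lets the image processing device 103 evaluate the target screen's drive-to-luminance behavior at any pixel value when adapting the dynamic range transform.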
[0132] In many systems, the indication of the target screen may comprise or consist of a maximum luminance and a gamma of the target screen. Thus, specifically, the characterization of the EOTF can be provided by two values, namely, the gamma and the white point/maximum luminance. The following description will focus on this scenario.
[0133] The description will also focus on embodiments where the distribution system conforms to the Blu-rayTM standard. Blu-rayTM is a family of Audio/Video/Data distribution formats based on optical disc technology. BD-ROMTM is the acronym for the read-only format of the Blu-ray Disc. This format is predominantly used for the distribution of video with high definition (2D and 3D) and high audio quality.
[0134] A BD-ROMTM player features two modes of operation: HDMV and BD-J. At any point in time the player is in either HDMV or BD-J mode. Profile 5 Blu-rayTM players feature stereoscopic 3D Video/Graphics rendering in addition to standard 2D Video/Graphics rendering. As an example, Figure 5 shows the model for the presentation planes in HDMV-2D mode.
[0135] As a specific example of the system of Fig. 1, the image signal can be a BDROM™ encoded video signal, and thus the image processing device 103 can specifically be a Blu-ray™ player. The encoded video can be the primary or optionally the secondary video content on the disc. The primary video is typically the actual film in 2D or possibly stereoscopic 3D format.
[0136] To achieve the ideal picture quality in the BDROMTM system, the system of Figure 1 uses an extension of the BDROMTM specification that allows the transmission of the parameters of the target screen. These data, together with the assumed or real information on the end user's screen, are then used by the BDROMTM player to perform the dynamic range transform. Specifically, the BDROMTM player (the image processing device 103) can perform additional video tone mapping or other processing dependent on the characteristics of the target screen and/or the end-user screen.
[0137] One option for transmitting the information on the target screen parameters is by embedding data indicative of these parameter values in the BDROMTM data on the disc. An extension data structure in the playlist file (xxxxx.mpls) can be used for this. This extension data structure will have a unique, new identification. Incompatible legacy BDROMTM players will be ignorant of this new data structure and will merely ignore it. This will ensure backward compatibility. An example of such an extension data structure is shown below:

[0138] In this example, Abs_Max_Luminance is a parameter with a value, for example, between 0 and 255 that indicates the maximum absolute luminance/white point of the target screen according to:

[0139] It will be noted that other numbers of bits for the mantissa or exponent can of course be used.
[0140] Gamma is a parameter with a value, for example, between 0 and 255 that indicates the gamma of the target screen according to: target screen EOTF gamma = Gamma / 25.
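A receiver could decode these two parameters along the following lines. The Gamma decoding follows the Gamma/25 formula given above; the mantissa/exponent bit layout of Abs_Max_Luminance is not reproduced in this excerpt, so the 4-bit/4-bit split below is a purely hypothetical illustration:

```python
def decode_target_gamma(gamma_param):
    """Decode the Gamma parameter: target screen EOTF gamma = Gamma / 25."""
    if not 0 <= gamma_param <= 255:
        raise ValueError("Gamma must be an 8-bit value (0..255)")
    return gamma_param / 25.0

def decode_abs_max_luminance(value):
    """HYPOTHETICAL decode of Abs_Max_Luminance as a mantissa/exponent
    value. The actual bit layout is defined by the (elided) formula in
    the specification; here we merely assume a high-nibble mantissa and
    a low-nibble exponent for illustration."""
    mantissa = (value >> 4) & 0x0F
    exponent = value & 0x0F
    return mantissa * (2 ** exponent)  # white point in nits
```

For instance, with these assumptions a Gamma byte of 55 decodes to an EOTF gamma of 2.2, a typical display gamma.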
[0141] Thus, in this example, a target screen reference is provided to the image processing device 103 by the BDROMTM disc, including an absolute maximum luminance and a gamma value for the target screen for which the video signal was generated. The image processing device 103 then uses this information when performing an automatic dynamic range transform to increase or decrease the dynamic range of the video signal for an end-user screen of higher/lower luminance.
[0142] It will be noted that many different dynamic range transforms are possible and that many different ways to adapt these dynamic range transforms based on target screen references can be used. Below, several examples are provided, but it will be noted that other approaches can be used in other embodiments.
[0143] First, the difference in the ideal mapping of a given original image into respectively an LDR image and an HDR image can be illustrated by Figure 6, which shows an example of the different tone mappings that can be used for an LDR screen (bottom part of the figure) and an HDR screen (top part of the figure). The original image is the same for both LDR and HDR. The histogram for this image is shown on the left of Figure 6. It shows that most pixels have luma values in the lower middle range. The histogram also shows a second, small peak at high luma values (e.g. car headlights or a flashlight).
[0144] In this example, the tone mapping is represented by three successive processing steps:
[0145] Clipping: mapping the luma values in the high and low ranges to a limited number of output luma values.
[0146] Expansion: adapting the dynamic range to the desired output luma dynamic range.
[0147] Brightness: adapting the average luminance level for optimal brightness.
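The three successive steps can be sketched on a single luma value as follows. This is an illustrative sketch only; the clipping thresholds (16/235), the 10-bit output range and the brightness factor are assumptions chosen for the example, not values from the patent:

```python
def tone_map(luma, lo=16, hi=235, out_max=1023, brightness=1.0):
    """Illustrative three-step tone mapping of an 8-bit input luma:
    1) clipping, 2) expansion, 3) brightness adaptation."""
    # Step 1: clip the high and low ranges to a limited set of values
    clipped = min(max(luma, lo), hi)
    # Step 2: expand the remaining range to the output dynamic range
    expanded = (clipped - lo) / (hi - lo) * out_max
    # Step 3: adapt the average level by a brightness factor
    return min(brightness * expanded, out_max)
```

For an HDR screen the clipping in step 1 can be less severe (wider `lo`..`hi` band) and the expansion in step 2 targets a larger output range, matching the difference between the top and bottom parts of Figure 6.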
[0148] In the LDR case, the luma range is mapped to the luminance range of an LDR screen. The dynamic range of the original image is much larger, and so the original image is severely clipped to accommodate the limited dynamic range of the screen.
[0149] In the case of HDR (top of figure) the clipping may be less severe as the dynamic range of the screen is an order of magnitude greater than for the LDR screen.
[0150] Figure 6 shows the histogram after each processing step, as well as the histograms of the image shown on the LDR screen and the HDR screen respectively. In particular, the far right histograms illustrate the tone-mapped LDR image when displayed on an HDR screen and vice versa. In the first case the image will be too bright and the high and low range luma values will lose too much detail. In the second case the image will be too dark and the midrange luma values will lose much detail and contrast.
[0151] As can be seen, merely presenting an LDR optimized image (in a luminance scaled version) on an HDR screen (or vice versa) can substantially reduce the image quality, and thus the image processing device 103 can perform a dynamic range transform to increase the image quality. Furthermore, since the optimization performed in the studio is highly dependent on the characteristics of the screen on which the optimization was performed, the ideal dynamic range transform to be performed by the image processing device 103 does not merely depend on the end user's screen, but also depends on the reference screen. Thus, the target screen reference provided to the image processing device 103 allows the image processing device 103 to perform the desired dynamic range transform not merely based on the assumed or known characteristics of the end user's screen, but also based on the actual screen used on the content provider side. In fact, it can be considered that the provision of the target screen reference allows the image processing device 103 to partially or completely reverse some of the tone mapping performed in the studio, thus allowing estimation of the characteristics of the original image. Based on this estimate, the image processing device 103 can then apply a tone mapping optimized for the specific dynamic range characteristics of the end user's HDR screen.
[0152] It will be noted that the image processing device 103 typically does not seek to perform a specific inverse tone mapping to recreate the original signal followed by a tone mapping suitable for the specific end-user screen. In fact, typically the received data will not provide enough information to perform this inverse tone mapping, and the tone mapping performed by the content provider can generally be partially irreversible. However, the image processing device 103 can perform a dynamic range transform that seeks to adapt the received image such that the dynamic range transform provides a result that may be a (possibly very coarse) approximation of the more theoretical operation of an inverse tone mapping to regenerate the original image followed by a tone mapping of the original image optimized for the specific desired dynamic range. Thus, the image processing device 103 can simply apply, for example, a simple mapping of luma values at the input of the dynamic range transform to appropriate luma values at the output of the transform. However, this mapping not only reflects the desired tone mapping of the original image for the given end-user screen, but also depends on the actual tone mapping already performed in the content provider apparatus 101. Thus, the image processing device 103 can use the target screen reference to adapt the applied transform to consider and adapt to the tone mapping that has been performed.
[0153] As an example, the image processing device 103 can be arranged to provide an output image for display on an HDR screen with a predetermined maximum luminance (such as 4000 nits). The received image/video can be tone-mapped for a 500 nit LDR screen. This tone mapping has then optimized the image for a given maximum luminance and gamma. As a specific example, the gamma function might look like curve 701 of Figure 7, and the resulting image when presented on a 500 nit screen might look like Figure 8.
[0154] When this image is to be presented on an HDR screen of, for example, 4000 nits, it is generally desirable that the light emitted in dark areas does not change substantially, whereas the light emitted in bright areas must be increased very substantially. Thus, a very different relationship between (linear) luminance values and actual drive values is required. Specifically, a substantially improved image would have been generated for an HDR screen if the mapping curve 703 of figure 7 had been used, that is, if a higher gamma had been applied in the content-side tone mapping. However, this higher-gamma mapping will, on a 500 nit screen, result in images that appear too dark, as illustrated in figure 9.
[0155] In the system, the image processing device 103 is informed of the gamma value for the target screen on the content side, and can thus derive curve 701. In addition, the desired curve 703 is known, as it depends on the dynamic range of the screen for which an output image is generated (which, for example, can be provided to the image processing device 103 by the screen 107 or can be assumed/predetermined). Thus, the image processing device 103 can apply to each pixel a transformation of the luminance value corresponding to the conversion from curve 701 to curve 703. The image processing device 103 can thus proceed to use the target screen reference provided by the content provider apparatus 101 to apply a dynamic range transform that converts the generated output signal from one suitable for an LDR screen to one suitable for an HDR screen.
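As an illustrative sketch only (not the claimed method), the per-pixel conversion from one gamma curve to another can be approximated by inverting the reference screen's gamma to estimate relative luminance and re-encoding with the gamma desired for the output screen. The function name and normalized value range are assumptions for illustration:

```python
# Minimal sketch: re-map a normalized drive value encoded for a
# reference screen gamma (e.g. curve 701) so that it follows the
# gamma preferred for the end-user screen (e.g. curve 703).
def remap_gamma(v, gamma_ref, gamma_out):
    """v is a normalized drive value in [0, 1]."""
    # Invert the reference EOTF to estimate relative luminance,
    # then re-encode with the gamma desired for the output screen.
    relative_luminance = v ** gamma_ref
    return relative_luminance ** (1.0 / gamma_out)
```

Note that a real implementation would additionally account for the white point luminances of both screens; this sketch only illustrates the shape conversion.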
[0156] It will be noted that the same considerations can apply when performing the dynamic range transform to reduce the dynamic range. For example, if the received content is to be displayed on a low quality, low luminance screen such as a cell phone screen, the preferred gamma for the mapping curve may be as indicated by curve 705 in figure 7, i.e. a gamma smaller than one may be preferred. When presented on a normal 500 nit LDR screen, a corresponding image would look very bright and with very little contrast, as indicated by figure 10, and the scenario would be even worse for an HDR screen.
[0157] Thus, if the image processing device 103 is to generate an image for this low-luminance screen, it can proceed to perform a dynamic range transform that reduces the dynamic range by adjusting the luminance values for the differences between curves 701 and 705.
[0158] As another example, if the content provider apparatus 101 provides an image directed at a low brightness/dynamic range screen, and specifically an image that is encoded according to curve 705, the image processing device 103 may use the knowledge of this encoding provided by the target screen reference to transform the received values into values suitable both for a 500 nit screen, by adapting for the difference between curves 705 and 701, and for a 4000 nit screen, by adapting for the difference between curves 705 and 703.
[0159] Thus, the provision of a target screen reference indicating a maximum/white point luminance and an assumed gamma value for the target screen allows the image processing device 103 to convert the received image into values suitable for the specific gamma and white point luminance of the screen on which the image is to be rendered.
[0160] In some systems, the target screen reference may comprise a tone mapping indication that represents a tone mapping used to generate the first stream of encoded video for the first target screen.
[0161] In some systems, the target screen reference can directly provide information on some of the specific tone mapping that was performed on the content provider side. For example, the target screen reference may include information defining the white point luminance and gamma for which the LDR (or HDR) image was generated, i.e. of the screen on which the tone mapping was performed. In addition, however, the target screen reference can provide some specific information which, for example, defines some of the information lost in the tone mapping that was performed on the content provider side.
[0162] For example, in the example of figure 6, a tone-mapped LDR image corresponding to the clipped image can be received by the image processing device 103. The image processing device 103 can apply a dynamic range transform that maps this into the appropriate dynamic range and non-linear relationship based on the target screen gamma and white point information. However, to provide an improved result, the severe clipping may be translated into a less severe clipping (or, in some scenarios, even into no clipping). Indeed, the content provider apparatus 101 can provide additional information identifying the specific clipping that was performed for the LDR image by the content provider, thus allowing the clipping to be partially or completely reversed. For example, the target screen reference can define the range that has been clipped, and the image processing device 103 can accordingly distribute the clipped values over that range according to a suitable algorithm (for example, identifying an area of clipped values (such as an explosion) and generating increasing brightness towards the center of that area).
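A very simple one-dimensional sketch of such a declipping algorithm (illustrative only; the function name, parameters and weighting are assumptions, not the disclosed method) could redistribute a run of clipped samples so that brightness rises toward the center of the clipped area, up to a restored maximum indicated by the control data:

```python
def declip_1d(row, clip_value, restored_max):
    """Spread each run of clipped samples so brightness rises toward
    the run's center, up to restored_max (hypothetical parameters)."""
    out = list(row)
    i = 0
    while i < len(out):
        if out[i] >= clip_value:
            j = i
            while j < len(out) and out[j] >= clip_value:
                j += 1                      # find the end of the clipped run
            center = (i + j - 1) / 2.0
            half = max((j - i) / 2.0, 1.0)
            for k in range(i, j):
                # weight 1.0 at the center of the clipped run, lower at its edges
                w = 1.0 - abs(k - center) / half
                out[k] = clip_value + w * (restored_max - clip_value)
            i = j
        else:
            i += 1
    return out
```

A two-dimensional version would identify clipped regions rather than runs, but follows the same idea.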
[0163] The target screen reference can additionally or alternatively provide information that defines an additional tone mapping that was performed on the content provider side. For example, a relatively standard tone mapping can be performed for most images in a movie or other video sequence. The image processing device 103 can, based on the gamma and white point luminance, convert such a tone-mapped image into an image of the desired dynamic range (higher or lower) using a dynamic range transform that assumes the default tone mapping on the content provider side. However, for some images the content provider may have performed a dedicated and subjective tone mapping. For example, the color grader might want a specific artistic effect or quality for some images, such as a fine gradation or color cast for dark images of a tense situation (in a horror movie) or a specific effect for dream-like scenes. Such a tone mapping can be characterized by the data in the target screen reference, thus allowing the image processing device 103 to adapt the dynamic range transform to the specific tone mapping that has been applied.
[0164] Thus, specifically, in some scenarios an additional/modified tone mapping is performed on the content provider side to generate a specific appearance, so that the image is modified with respect to what would be expected from a fixed adaptation to the native electro-optic behavior of the target screen. The data provided by the content provider apparatus 101 can specify a desired appearance relative to the reference screen, and this can be used by the image processing device 103 to actually generate the desired optical behavior considering all factors (e.g. where a dark encoding in the input signal could accidentally end up below the reflected ambient light, so that it can no longer be compensated according to the behavior intended on the content provider side).
[0165] As an example, if it is known that the target screen's gamma is low for the darkest values, it is possible for such a (reference) screen to adjust the appearance of, say, horror scenes. For example, the image can be offset by an extra luminance boost so that the image still looks dark, but with at least some structure of the objects still visible.
[0166] As an example, along with the gamma and white point luminance of the reference target, the color grader on the content provider side can provide some (additional) information about the artistic impression of certain regions and/or images. For example, for a given EOTF, the content provider may indicate that a particular area is desired to have increased brightness for better visibility, or reduced contrast to provide a hazy appearance, etc. Thus, along with an EOTF (for example, represented by gamma and white point luminance), the target screen reference can indicate limits of a partial/local screen luminance range and provide data that gives more accurate information on the preferred allocation of gray levels.
[0167] In some embodiments, the dynamic range processor 203 may be arranged to select between generating the output image as the received encoded image and generating the output image as a transformed version of the first encoded image, in response to the target screen reference.
[0168] Specifically, if the white point luminance indicated by the target screen reference is sufficiently close to the white point luminance of the end user's screen, the dynamic range transform may simply consist of not performing any processing on the received encoded image, that is, the input image can simply be used as the output image. However, if the white point luminance indicated by the target screen reference is different from the white point luminance of the end user's screen, the dynamic range transform can modify the received image according to a suitable mapping of the pixels of the received image to output image pixels. In these cases, the mapping can be adapted depending on the target screen reference. In another example, one or more predetermined mappings can be used.
[0169] For example, the image processing device 103 may include a first predetermined mapping that has been determined to provide an output image suitable for a doubling of the white point luminance, and a second predetermined mapping that has been determined to provide an output image suitable for a halving of the white point luminance. In this example, the image processing device 103 can select between the first mapping, the second mapping, and a unit mapping depending on the white point luminance of the reference target screen and the white point luminance of the end-user screen. The image processing device 103 can specifically select the mapping that most closely matches the ratio between the white point luminance of the reference target screen and the white point luminance of the end-user screen.
[0170] For example, if an input image is received with a reference target screen indicating that it has been optimized for a 500 nit screen and the end user screen is a 1000 nit screen, the image processing device 103 will select the first mapping. If, instead, the reference target screen indicates that the input image has been optimized for a 1000 nit screen, the image processing device 103 will select the unit mapping (i.e. use the input image directly). If the reference target screen indicates that it has been optimized for a 2000 nit screen, the image processing device 103 will select the second mapping.
[0171] If intermediate values for the white point luminance of the target screen are received, the image processing device 103 can select the mapping closest to the ratio between the white point luminances, or it can, for example, interpolate between the mappings.
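The nearest-ratio selection described above can be sketched as follows (illustrative only; the mapping names and the set of candidate ratios are assumptions matching the doubling/unit/halving example of paragraph [0169]):

```python
def select_mapping(ref_white, user_white):
    """Pick the predetermined mapping whose design ratio is closest to
    the actual end-user/reference white point luminance ratio."""
    ratio = user_white / ref_white
    # Candidate mappings and the white point ratio each was designed for.
    candidates = {"halving_mapping": 0.5, "unit_mapping": 1.0, "doubling_mapping": 2.0}
    return min(candidates, key=lambda name: abs(candidates[name] - ratio))
```

Interpolation between the two nearest mappings, rather than hard selection, would follow the same ratio computation.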
[0172] In some embodiments, the dynamic range transform may comprise or consist of a chroma transform. Thus, in some embodiments, the dynamic range processor 203 can modify the chromaticities of the rendered image depending on the reference target screen. For example, when a received HDR image is rendered on an LDR screen, the compression can result in a flatter image with fewer variations and gradations in individual image objects. The dynamic range transform can compensate for these reductions by increasing chroma variations. For example, when an image with an intensely bright apple is optimized for rendering on an HDR screen, rendering on an LDR screen with reduced dynamic range will typically make the apple appear less prominent and look duller. This can be compensated by the dynamic range transform by making the apple color more saturated. As another example, texture variations can become less perceptually significant due to reduced luminance variations, and this can be compensated for by increasing texture chroma variations.
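A minimal sketch of such a saturation boost (an assumption for illustration, not the disclosed transform) scales each channel's distance from a luma approximation; Rec. 709 luma weights are used here purely as an example:

```python
def boost_saturation(r, g, b, factor):
    """Increase chroma by scaling each channel's distance from the
    pixel's luma approximation (normalized RGB in [0, 1])."""
    # Rec. 709 luma weights, used here as an illustrative choice.
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
    clamp = lambda x: min(max(x, 0.0), 1.0)
    return tuple(clamp(luma + factor * (c - luma)) for c in (r, g, b))
```

A factor above 1.0 increases saturation (e.g. to make the apple more prominent on an LDR screen); gray pixels are unaffected.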
[0173] In some systems, the video signal may comprise a data field that includes dynamic range transform control data, and the dynamic range processor 203 can adapt the dynamic range transform in response to this control data. This can be used by the content owner/provider to retain at least some input or control over the rendering of the provided content.
[0174] The control data can, for example, define an operation or parameter of the dynamic range transform that must be applied, can be applied, or is recommended to be applied. Control data can further be differentiated for different end-user screens. For example, individual control data can be provided for a plurality of possible end-user screens, such as one set of data for a 500 nit screen, another set for a 1000 nit screen, another set for a 2000 nit screen, and yet another set for a 4000 nit screen.
[0175] As an example, the content creator can specify that tone mapping should be performed by the dynamic range processor 203 depending on the characteristics of the end user's screen, as illustrated in figure 11. In the example, the control data can specify a mapping for each of the three areas corresponding to given values of the maximum screen luminance (x-axis) and the light incident on the screen (and thus the screen reflections - y-axis).
[0176] Thus, in the specific example, mapping 1 is used for low brightness screens in low ambient light environments. Mapping 1 can simply be a unit mapping, i.e. the received LDR image can be used directly. For a high maximum luminance (HDR) screen in a relatively dark environment (low screen reflections), mapping 2 can be used. Mapping 2 can perform a mapping that stretches the brighter luminances of the LDR image while substantially maintaining the intensity of the darker segments. For a high maximum luminance (HDR) screen in a relatively bright environment (substantial screen reflections), mapping 3 can be used. Mapping 3 can perform a more aggressive mapping that not only stretches the brighter luminances of the LDR image, but also brightens and increases contrast in the darker image areas.
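The region selection of figure 11 might be sketched as follows; the threshold values are purely illustrative assumptions, since the actual boundaries would be given by the control data:

```python
def choose_mapping(max_luminance_nits, ambient_lux):
    """Select one of the three mappings of figure 11 from the screen's
    maximum luminance and the ambient light level (thresholds assumed)."""
    if max_luminance_nits < 1000:
        return "mapping_1"   # low brightness screen: use the LDR image directly
    if ambient_lux < 100:
        return "mapping_2"   # HDR screen, dark environment: stretch bright areas
    return "mapping_3"       # HDR screen, bright environment: also lift dark areas
```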
[0177] In some scenarios, control data may specify boundaries between mappings, with the mappings themselves being predetermined (e.g., standardized or known on both the content provider side and the rendering side). In some scenarios, the control data may further define elements of different mappings or may specify the mappings precisely, for example using a gamma value or specifying a specific transformation function.
[0178] In some embodiments, the dynamic range transform control data can directly and explicitly specify the dynamic range transform that must be performed to transform the received image into an image with a different dynamic range. For example, control data can specify a direct mapping of input image values to output image values for a range of white points of the target output screen. The mapping can be provided as a simple parameter allowing the appropriate transform to be performed by the dynamic range processor 203, or detailed data can be provided, such as a specific look-up table or mathematical function.
[0179] As a low complexity example, the dynamic range transform can simply apply a piecewise linear function to the input values of an LDR image to generate improved HDR values. In fact, in many scenarios, a simple mapping consisting of two linear segments, as illustrated in figure 12, can be used. The mapping shows a direct mapping between input pixel values and output pixel values (or in some scenarios the mapping may reflect a (possibly continuous) mapping between input pixel luminances and output pixel luminances). It will be noted that the same mapping can be used to map an input HDR image to an output LDR image.
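A two-segment mapping of this kind is fully defined by a single knee point, as noted in paragraph [0183]. The following sketch (illustrative; 8-bit values and the function name are assumptions) maps values below the knee with a gentle slope and stretches values above it:

```python
def knee_map(v, knee_in, knee_out, v_max=255):
    """Two-segment piecewise linear mapping defined by the knee point
    (knee_in, knee_out): dark inputs below the knee are compressed or
    preserved, bright inputs above it are stretched toward v_max."""
    if v <= knee_in:
        return v * knee_out / knee_in
    return knee_out + (v - knee_in) * (v_max - knee_out) / (v_max - knee_in)
```

With a knee of (128, 64), dark values keep roughly half their level while the upper half of the input range is stretched over three quarters of the output range, matching the LDR-to-HDR behavior described above.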
[0180] Specifically, for an LDR to HDR mapping, the approach provides a dynamic range transform that keeps the dark areas of an image dark while at the same time allowing the substantially higher dynamic range to be used to provide a much brighter rendering of the bright areas, as well as an improved and more vivid looking midrange. For an HDR to LDR mapping, the approach provides a dynamic range transform that maintains the dark areas of an image while compressing the brighter areas to reflect the reduced brightness range of the screen.
[0181] However, the exact transformation depends on the target screen for which an image was generated and the screen on which it is to be rendered. For example, when rendering an image generated for a 500 nit screen on a 1000 nit screen, a relatively modest transformation is required and the stretching of the bright areas is relatively limited. However, if the same image is displayed on a 5000 nit screen, a much more extreme transformation is needed to fully exploit the available brightness without over-brightening the dark areas.
[0182] Thus, the mapping may depend on the target screen for which the original image was generated. For example, if an input image optimized for 1000 nits is to be rendered on a 2000 nit screen, a relatively modest transformation is required and the stretching of the bright areas is relatively limited. However, if an image has been optimized for a 500 nit screen and is to be displayed on a 2000 nit screen, a much more extreme transformation is needed to fully exploit the available brightness without brightening the dark areas too much. Figure 13 illustrates how two different mappings can be used for, respectively, a 1000 nit input image (curve 1301, maximum value of 255 corresponding to 1000 nits) and a 500 nit input image (curve 1303, maximum value of 255 corresponding to 500 nits) when mapping to a 2000 nit screen (maximum value of 255 corresponding to 2000 nits).
[0183] An advantage of this simple relationship is that the desired tone mapping can be communicated with very low overhead. In fact, the control data can specify the knee of the curve, that is, the transition point between the two linear parts. Thus, a data value with simply two components can specify the desired tone mapping to be performed by the image processing device 103 for different screens. The image processing device 103 can further determine suitable values for other maximum luminance values by interpolating between the values provided.
[0184] In some implementations, more points can, for example, be provided to define a curve that is still piecewise linear, but with more linear segments. This can allow for more accurate tone mapping and improve the resulting image quality while introducing only a relatively small overhead.
[0185] In many implementations, the control data may not specify a specific tone mapping that must be performed, but rather provide data that defines limits within which the dynamic range transform/tone mapping can be freely adapted by the image processing device 103.
[0186] For example, rather than specifying a specific transition point for the curves of figures 12 and 13, the control data can define limits for the transition point (with possibly different limits being provided for different maximum brightness levels). Thus, the image processing device 103 can individually determine the desired parameters for the dynamic range transform, so that this can be set to provide the preferred transition for the specific screen considering, for example, the user's specific preferences. However, at the same time the content provider can ensure that this freedom is restricted to an acceptable range, thus allowing the content provider to retain some control over how the content is rendered.
[0187] Thus, the dynamic range transform control data may include data that defines transform parameters that must be applied by the dynamic range transform performed by the dynamic range processor 203 and/or that defines limits for the transform parameters. The control data can provide this information for a range of maximum brightness levels, thus allowing the adaptation of the dynamic range transform to different end-user screens. In addition, for maximum brightness levels not explicitly included in the control data, appropriate data values can be generated from the available data values, for example, by interpolation. For example, if a knee point between two linear parts is indicated for a 2000 nit screen and for a 4000 nit screen, a suitable value for a 3000 nit end-user screen can be found by simple interpolation (e.g. by a simple average in the specific example).
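The interpolation step above can be sketched as follows (illustrative; the data layout, a dictionary from white point luminance to knee point, is an assumption):

```python
def interpolate_knee(knees, target_nits):
    """Linearly interpolate a knee point between the white point
    luminances for which control data was provided.
    `knees` maps white point luminance -> (knee_in, knee_out)."""
    lums = sorted(knees)
    lo = max(l for l in lums if l <= target_nits)   # nearest provided value below
    hi = min(l for l in lums if l >= target_nits)   # nearest provided value above
    if lo == hi:
        return knees[lo]                            # exact control data available
    t = (target_nits - lo) / (hi - lo)
    return tuple((1 - t) * a + t * b for a, b in zip(knees[lo], knees[hi]))
```

For a 3000 nit screen with control data at 2000 and 4000 nits this reduces to the simple average mentioned in the text; non-linear interpolation (paragraph [0199]) would replace the linear blend.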
[0188] It will be noted that many different and varied approaches, both to the dynamic range transform and to restricting, adapting and controlling it from the content provider side via the additional control data, can be used in different systems depending on the specific preferences and requirements of the individual application.
[0189] In fact, many different commands or parameter values can be provided in the control data to generate tone mappings according to the content provider's preferences.
[0190] For example, in low complexity systems, a simple dynamic range transform can be applied and the content provider apparatus 101 can simply provide a white level and black level for the target screen, which are then used by the dynamic range processor 203 to determine the tone mapping to apply. In some systems a tone mapping function (gamma or otherwise) may be provided as mandatory for the mapping of at least a range of the input image. For example, the control data can specify that the middle/darkest ranges are to be rendered according to a given mapping, while allowing brighter ranges to be freely mapped by the image processing device 103.
[0191] In some scenarios, the control data may merely provide a suggestion for a suitable mapping that can be applied, for example, in the midrange area. In this case, the content provider can thus assist the image processing device 103 by providing suggested dynamic range transform parameters that have been found (for example, through manual optimization by the content provider) to provide a high image quality when viewed on a given HDR screen. The image processing device 103 can advantageously utilize this, but is free to modify the mapping, for example, to accommodate individual user preferences.
[0192] In many scenarios the mapping is at least partially performed based on control data that will represent a functional relationship of relatively low complexity, such as a gamma mapping, S curve, combined mapping defined by partial specifications for individual ranges etc. However, in some scenarios more complex mappings can certainly be used.
[0193] It will also be noted that the dynamic range transform can generally include an increase or decrease in the number of bits used to represent the values. For example, an 8-bit image can be transformed into a 12-bit or 14-bit image. In these cases, the control data from the content provider apparatus 101 can be provided independently of the changed quantization. For example, an 8-bit to 8-bit tone mapping (a "shape" for the gray value sub-distribution) can be defined by the content provider apparatus 101, and the image processing device 103 can scale this to the specific white brightness of the screen taking the transformation to more bits into account.
[0194] In other embodiments or scenarios, the dynamic range transform may include a reduction in the number of bits used to represent the values. For example, a 12-bit image can be transformed into an 8-bit image. These scenarios can generally occur when a reduction in dynamic range is provided by the dynamic range transform, for example, when converting a 12-bit HDR input image into an 8-bit image to be rendered on an LDR screen.
[0195] As mentioned, the control data can be mandatory or voluntary. In fact, the received data may include one or more fields that indicate whether the provided tone mapping parameters are mandatory, allowed or suggested.
[0196] For example, a suggested tone mapping function can be provided together with an indication of how large a deviation from it can be accepted. An image processing device 103 in a default configuration can then automatically apply the suggested mapping. However, the transform can be modified, for example, to reflect the user's personal preferences. For example, a user input can change the settings of the image processing device 103, for example, so that dark areas of an image are rendered brighter than considered ideal by the content provider. For example, a user can simply press a button to increase brightness, and the tone mapping can be changed accordingly (for example, the lower linear section of the curves in figures 12 and 13 is moved up). The user can thus introduce a tuning of the tone mapping. However, data on how large a tuning is acceptable to the content provider can be included in the control data, thus restricting the dynamic range transform to generate output images that are still considered by the content provider to retain the integrity of the provided image. The control data can, for example, also specify the effect of user interactions, such as defining or limiting the change in brightness that occurs for each press of the button by a user.
[0197] The dynamic range transform thus provides a transformation that is directed at providing an image that is appropriate for the specific end-user screen 107 while considering the display characteristics of the screen for which the input image was generated. Thus, the image processing device 103 generates an output signal that is associated with a given maximum luminance/brightness value, i.e., which is directed at rendering on a screen with that white point/maximum luminance value. In some systems, the screen white point luminance may not be precisely known by the image processing device 103, and thus the output signal may be generated for an assumed white point luminance (e.g., manually entered by a user). In other applications (as will be described later), the screen can provide the information on the white point luminance, and the image processing device 103 can adapt the dynamic range transform based on that information.
[0198] If the white point luminance for which the output signal is generated matches exactly or sufficiently closely the white point luminance of one of the received images (according to any suitable criterion, such as a difference between the white point luminances being below a threshold), the image processing device 103 can proceed to use that image directly as the output image, i.e. the dynamic range transform can simply be a unit mapping. Also, if the output white point luminance does not directly correspond to a white point luminance of a received image, but matches an end-user screen white point luminance for which explicit dynamic range transform control data was provided, this control data can be used directly to adapt the dynamic range transform. If the output white point luminance corresponds neither to the white point luminance of a received image nor to a white point luminance for which dynamic range transform control data was provided, the tone mapping parameters provided by the control data for different white point luminances can be used to adapt the dynamic range transform in dependence on the output white point luminance. In particular, the dynamic range processor 203 can interpolate between the tone mapping parameters for other white point luminance values to obtain parameters for the specific output white point luminance. In many embodiments, a simple linear interpolation will suffice, but it will be noted that many other approaches can be used.
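The three-way decision just described might be sketched as follows (illustrative; the 10% relative threshold and the return labels are assumptions, not values from the disclosure):

```python
def decide_transform(ref_white, out_white, control_whites, rel_threshold=0.1):
    """Decide how to derive the output image: pass it through unchanged
    when the white points are close enough, use explicit control data
    when available for the output white point, otherwise interpolate."""
    if abs(out_white - ref_white) <= rel_threshold * ref_white:
        return "unit_mapping"
    if out_white in control_whites:
        return "use_control_data"
    return "interpolate_control_data"
```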
[0199] In fact, the control data can, for example, also provide information on how the tone mapping parameters provided for different screen white point luminances should be processed to generate the tone mapping parameters for the specific white point output luminance. For example, the control data might indicate a non-linear interpolation function that should be used to generate the appropriate tone mapping parameters.
[0200] It will also be observed that the dynamic range transform is not necessarily constant for different images, or even within the same image.
[0201] In fact, in many systems the dynamic range transform control data can be continuously updated, thus allowing the dynamic range transform performed by the dynamic range processor 203 to be adapted to the current image characteristics. This can allow different tone mappings to be used for dark images/dark scenes than for light images/scenes. This can provide improved performance. In fact, a time-variable dynamic range transform controlled in response to the dynamically updated dynamic range transform control data can be used to provide additional control to the content provider. For example, the rendering of a dark scene can be different on an HDR screen depending on whether the scene is a tense scene intended to provide discomfort or whether the scene is merely dark to match a night scene (in the first case the dark scene can be rendered as dark on the HDR screen as on an LDR screen, and in the second case the dark scene can be rendered somewhat brighter, thus exploiting the additional dynamic range to allow visually enhanced perceptible differentiation in the dark areas).
[0202] The same considerations can be applied within an image. For example, a scene might correspond to a clear sky over a dark shaded ground (for example, a clear sky in the upper half of the image and a forest in the lower half of the image). The two areas can advantageously be mapped differently when mapping from LDR to HDR, and the dynamic range transform control data can specify the difference in these mappings. Thus, the dynamic range transform control data can include tone mapping parameters that change for different images and/or that are position dependent within the image.
[0203] As a specific example, at least some control data can be associated with a given image area, luminance range, and/or range of images.
[0204] The dynamic range transform control data can be provided to the image processing device 103 according to any suitable communication approach or standard.
[0205] In the specific example, the communication between the content provider apparatus 101 and the image processing device 103 uses Blu-rayTM media. Transmission of the control commands for the dynamic range transform can be achieved by embedding these parameter values in the BDROM data on the disc. An extension data structure in the playlist file (xxxxx.mpls) can be used for this. This extension data structure will have a unique and new identification. Legacy BDROM players will be ignorant of this new data structure and will simply ignore it. This will ensure backwards compatibility. A possible implementation of the syntax and semantics of such an LHDR_descriptor is shown below.
[Table: LHDR_descriptor syntax - columns: Syntax / No. of bits / Mnemonic]
[0206] In this example the LHDR_descriptor contains three processing descriptors: Video_Process_descriptor, DR_Process_descriptor, and Level_Process_descriptor. These parameters specify additional video processing in case the target screen category is different from the end user screen category. As an example, these parameters can have the following values.
[Table: Video_Process_descriptor values]
[Table: DR_Process_descriptor values]
[Table: Level_Process_descriptor values]
[0207] The previous example focused on the example where the signal received from the content provider apparatus 101 comprises only one version of the picture/video sequence, and specifically where the signal comprises only one LDR picture/video sequence.
[0208] However, in some systems and implementations, the content provider apparatus 101 can generate an image signal that comprises more than one version of the image. In these scenarios, one image can be tone-mapped for a target screen and another image can correspond to the same original image, but tone-mapped for a different target screen. Specifically, one image can be an LDR image generated for, for example, a 500 nit screen and another image can be an HDR image generated for, for example, a 2000 nit screen.
[0209] In this example, the image signal may further comprise a second reference target screen, i.e. a reference target screen may be provided for each of the images, thus indicating the screen characteristics for which the encoder-side tone mapping has been optimized for the individual images. Specifically, a gamma and a maximum brightness parameter can be provided for each image/video sequence.
[0210] In these systems, the image processing device 103 can be arranged to apply the dynamic range transform in response to the second reference target screen, and specifically considering both the first and the second reference target screen.
[0211] The dynamic range transform can not only adapt the specific mapping or operation that is performed on an image, but can also depend on the reference target screens to select which image to use as the basis for the transformation. As a low complexity example, the dynamic range processor 203 can select between using the first and the second image depending on how closely the associated reference target screen matches the white point luminance for which the output signal is generated. Specifically, the image associated with the white point luminance closest to the desired output white point luminance can be selected. Thus, if an output LDR image is generated, the dynamic range transform can be performed on the encoded LDR image. However, if an HDR image with a maximum brightness higher than that of the encoded HDR image is generated, the dynamic range transform can be performed on the encoded HDR image.
[0212] If an image is to be generated for a maximum brightness between the white point luminances of the encoded images (e.g. for a 1000 nits screen), the dynamic range transform can be based on both images. In particular, an interpolation between the images can be performed. Such interpolation can be linear or non-linear and can be performed directly on the encoded images before the transformation, or can be applied to the images after applying the transformation. The weighting of the individual images can typically depend on how close they are to the desired output maximum brightness.
[0213] For example, a first transformed image can be generated by applying a dynamic range transform to the first encoded image (the LDR image) and a second transformed image can be generated by applying a dynamic range transform to the second encoded image (the HDR image). The first and second transformed images are then combined (e.g. summed) to generate the output image. The weights of respectively the first and the second transformed image are determined by how closely the reference target screens of respectively the first and the second encoded image correspond to the desired output maximum brightness.
[0214] For example, for a 700 nits screen, the first transformed image may be weighted much higher than the second transformed image, and for a 3000 nits screen the second transformed image may be weighted significantly higher than the first transformed image. For a 2000 nits screen the two transformed images can possibly be weighted equally, and the output values can be generated as an average of the values for each image.
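The weighted combination described in the last few paragraphs can be sketched as follows. This is a minimal illustration assuming the 500 nits and 2000 nits reference targets of the example above and a simple linear weighting; the function names and the linear weighting scheme are assumptions for illustration, not part of any specification.

```python
def blend_weights(target_nits, lo_nits=500.0, hi_nits=2000.0):
    """Weights for the low-range and high-range transformed images.

    The closer the desired output white point is to an image's
    reference target, the higher that image's weight. The weighting
    here is linear, but it could equally be non-linear.
    """
    if target_nits <= lo_nits:
        return 1.0, 0.0
    if target_nits >= hi_nits:
        return 0.0, 1.0
    w_hi = (target_nits - lo_nits) / (hi_nits - lo_nits)
    return 1.0 - w_hi, w_hi

def blend_pixels(pixels_lo, pixels_hi, target_nits):
    """Combine corresponding pixels of the two transformed images."""
    w_lo, w_hi = blend_weights(target_nits)
    return [w_lo * a + w_hi * b for a, b in zip(pixels_lo, pixels_hi)]
```

For a 1250 nits output, for instance, both images would receive equal weight under this linear scheme, matching the idea that an in-between screen draws on both encoded versions.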
[0215] As another example, the transformation can be performed selectively based on the first or the second image for different areas of the image, for example depending on the characteristics of the image.
[0216] For example, for relatively dark areas the dynamic range transform can be applied to the LDR image to generate pixel values that are suitable for a 1000 nits screen, while exploiting the finer resolution that may be available for dark areas in the LDR image compared to the HDR image (e.g. if the same number of bits is used for both images). However, for the brighter areas pixel values can be generated by applying a dynamic range transform to the HDR image, thus exploiting that this image will typically contain more information in the high light ranges (specifically, the loss of information due to clipping is typically much smaller for an HDR image than for an LDR image).
[0217] Thus, when more than one image is received from the content provider apparatus 101, the image processing device 103 can generate the output image from one of these images, or can combine them in generating the output image. The selection and/or combination of the encoded images is based on the reference target screen provided for each image as well as on the maximum brightness for which the output signal is generated.
[0218] It will be noted that in addition to the combining and/or selection of the individual encoded images, the individual dynamic range transforms can also be adjusted and adapted as previously described. For example, the previously described approaches can be applied individually to each dynamic range transform. Similarly, dynamic range transform control data can be received and can be used to adapt and control each dynamic range transform as previously described. In addition, the dynamic range transform control data may contain information defining mandatory, optional or preferred/suggested parameters for the combination of the processing of the first and second encoded images.
[0219] In some systems, the dynamic range transform control data comprises different transform control data for different image categories. Specifically, different types of images/content can be processed differently when performing the dynamic range transform.
[0220] For example, different tone mappings can be defined or suggested for different types of video content; for example, a different dynamic range transform may be defined for a cartoon, a horror movie, a football game, etc. The received video signal can in this case provide metadata describing the type of content (or content analysis can be applied locally in the image processing device 103), and the dynamic range transform appropriate for the specific content can then be applied.
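The category-dependent selection can be illustrated with a small lookup; the category names, the per-category gamma values and the fallback behaviour are all hypothetical choices for this sketch.

```python
# Hypothetical content categories and per-category gamma exponents;
# these names and numbers are illustrative, not from any standard.
CATEGORY_GAMMA = {
    "cartoon": 1.0,   # neutral mapping
    "horror": 1.4,    # darken mid tones
    "sport": 0.9,     # lift mid tones
}

def transform_for_category(norm_value, category):
    """Apply a category-dependent gamma to a pixel value in [0, 1].

    Unknown categories fall back to a neutral mapping, analogous to a
    legacy device ignoring metadata it does not understand.
    """
    gamma = CATEGORY_GAMMA.get(category, 1.0)
    return norm_value ** gamma
```

In practice the per-category data would come from the received metadata (or from local content analysis) rather than a hard-coded table.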
[0221] As another example, a rendered image can be generated as a combination of superimposed images, with different transforms being provided for the different images. For example, on Blu-ray™ a number of different presentation planes are defined (as illustrated in figure 5) and different dynamic range transforms can be applied to the different presentation planes.
[0222] The characteristics of each of these presentation planes are optimized by the content provider for a specific target screen. The viewing experience for the end user can be optimized by adapting the characteristics of the presentation planes to the end user's screen. Typically, the ideal adaptation will be different for different presentation planes.
[0223] With respect to tone mapping, the situation in today's BDROM system is as follows:
[0224] - Video tone mapping (global and/or local) is performed in the studio using a studio monitor.
[0225] - Graphics Tone Mapping (generally different from Video Tone Mapping) is performed in the studio using a studio monitor.
[0226] - OSD tone mapping is performed on the BDROM player.
[0227] - Local and/or global tone mapping is performed on screen on the combined Video and Graphics signal. This processing cannot be controlled by the end user.
[0228] - Global tone mapping is performed on-screen on the combined video and graphics signal. This processing depends, among other things, on the brightness and contrast values by the end user.
[0229] Improved picture quality is achieved when:
1. Video tone mapping is optimized for the end user's screen.
2. Graphics tone mapping is optimized for the end user's screen.
3. The system allows graphics tone mapping different from the video tone mapping.
4. The system allows different graphics tone mapping for different graphics components.
5. The system allows video and graphics tone mapping depending on the characteristics of the video.
[0230] Also note that in the case that both an LDR and an HDR version of the video are present on the disc, the additional tone mapping will depend on two sets of parameters for the target screens: one for the LDR version of the video and one for the HDR version of the video.
[0231] In another improved implementation, the Video and/or Graphics Tone Mapping varies over time and depends, for example, on the Video Content in a scene. The content provider may send tone mapping instructions to the player depending on the characteristics of the Video and Graphics content. In another implementation, the player autonomously extracts the Video characteristics from the video signal and adapts the Video & Graphics Tone Mapping depending on these characteristics.
[0232] For example, subtitles can be dimmed for a certain period of time, or a certain gamma shift can be implemented for a certain amount of time (and both can be considered).
[0233] In the following, an example of how to provide control commands for graphics tone mapping to a BDROM player is described.
[0234] A BDROM graphics stream consists of segments embedded in PES packets that are embedded in a transport stream. Figure 14 illustrates the corresponding data structure.
[0235] Synchronization with the main video is done at the elementary stream level using PTS values in the PES packets. The BDROM graphics segment consists of a segment descriptor and segment data. The segment descriptor contains the segment type and length.
[0236] The following table shows some types of segments defined in the Blu-ray Disc standard:

[0237] In the existing specification, the values 0x83 to 0xFF are reserved. Thus, a new segment type can be defined using, for example, the value 0x83 to indicate a segment that contains the LHDR_Processing_definition. In general, the LHDR_Processing_definition segment defines the way the graphics decoder processes graphics in case the target screen is different from the end user's screen.
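A decoder-side sketch of how such a segment might be parsed and recognized is shown below, assuming the segment descriptor consists of a 1-byte type followed by a 2-byte big-endian length (the layout used by Blu-ray presentation graphics segments); the constant name is illustrative.

```python
import struct

# Hypothetical type code for the new LHDR_Processing_definition segment;
# 0x83 is taken from the reserved range mentioned above.
LHDR_PROCESSING_DEFINITION = 0x83

def parse_segment(data):
    """Parse one graphics segment: a descriptor (1-byte type,
    2-byte big-endian length) followed by the segment data.

    A legacy decoder would simply skip unknown types, which is what
    keeps the extension backwards compatible.
    """
    seg_type, length = struct.unpack_from(">BH", data, 0)
    payload = data[3:3 + length]
    return seg_type, payload

# Example: a 2-byte payload carried in an 0x83 segment.
segment = bytes([0x83, 0x00, 0x02, 0x01, 0x04])
seg_type, payload = parse_segment(segment)
```

A real decoder would iterate over consecutive segments in the stream, dispatching each payload by its type code.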
[0238] The following table shows an example of a possible structure of the LHDR_Processing_definition segment:

[0239] In this example, the LHDR_Processing_definition segment contains two processing descriptors: Pop-up_Process_descriptor and Subtitle_Process_descriptor. The segment can also contain palettes to be used in case the target screen category is different from the end user screen category. The LHDR palette contains the same number of entries as the original palette, but the entries are optimized for the other screen category.
[0240] The Pop-up_Process_descriptor parameter specifies further processing of the pop-up graphics in the case that the target screen category is different from the end user screen category.
[0241] As an example this parameter can have the following values:
- Pop-up_Process_descriptor=0x00: no further processing.
- Pop-up_Process_descriptor=0x01 to 0x03: a defined minimum transparency value.
- Pop-up_Process_descriptor=0x04: the graphics processor uses the palettes defined in the LHDR_Processing_definition segment.
- Pop-up_Process_descriptor=0x05: no restrictions on further processing.
[0242] The Subtitle_Process_descriptor parameter specifies additional processing of the subtitle graphics in case the target screen category is different from the end user screen category.
[0243] As an example this parameter can have the following values:
- Subtitle_Process_descriptor=0x00: no further processing.
- Subtitle_Process_descriptor=0x01 to 0x03: adapt the luma value.
- Subtitle_Process_descriptor=0x04: the graphics processor uses the palettes defined in the LHDR_Processing_definition segment.
- Subtitle_Process_descriptor=0x05: no restriction on further processing.
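The example value assignments above can be captured in a small dispatch routine; the returned labels are purely illustrative names for the behaviours described, not part of any specification.

```python
def subtitle_processing(descriptor):
    """Map the example Subtitle_Process_descriptor values to a
    processing decision, following the value list above."""
    if descriptor == 0x00:
        return "no_further_processing"
    if 0x01 <= descriptor <= 0x03:
        # the three codes select different degrees of luma adaptation
        return "adapt_luma_level_%d" % descriptor
    if descriptor == 0x04:
        return "use_lhdr_palettes"
    if descriptor == 0x05:
        return "unrestricted"
    # values outside the defined range would be reserved
    return "reserved"
```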
[0244] Specific example syntaxes for Pop-up_Process_descriptor and Subtitle_Process_descriptor are provided in the following tables:


[0245] Specific examples of differentiated tone mapping depending on the screen characteristics are illustrated in figures 15 and 16. In this example, the original content comprises HDR video content and subtitles. The tone mapping for the video is the same as in the example of figure 6.
[0246] The graphics comprise white subtitle characters with a black border. The original histogram shows one peak in the low luma range and another peak in the high luma range. This histogram for the subtitle content is very suitable for an LDR screen, as it will result in bright, readable text on the screen. However, on an HDR screen these characters would be too bright, causing noise, halo and glare. For this reason, the tone mapping for the subtitle graphics is adapted as illustrated in figure 16.
[0247] In the previous example, the image processing device 103 generated an output image to correspond to a desired maximum brightness, that is, aimed at presentation on a screen with a given dynamic range/white point luminance. The output signal can specifically be generated to match a user setting that indicates a desired maximum/white point luminance, or it can simply assume a given dynamic range for the screen 107.
[0248] In some systems the image processing device 103 may comprise a dynamic range processor 203 that is arranged to adapt its processing in dependence on data received from the screen 107 indicating a luminance characteristic of the screen 107.
[0249] An example of such an image processing device 103 is illustrated in figure 17. The image processing device 103 corresponds to that of figure 1, but in this example it also comprises a screen receiver 1701 that receives a data signal from the screen 107. The data signal comprises a data field comprising an indication of the dynamic range of the screen 107. The screen dynamic range indication comprises at least one luminance specification indicative of a luminance property of the screen. Specifically, the luminance specification can include a specification of a maximum brightness, that is, a white point/maximum luminance for the screen. The screen dynamic range indication can thus define whether the screen is an HDR screen or an LDR screen, and can in particular indicate the maximum light output in nits. For example, it can define whether the screen is a screen of 500 nits, 1000 nits, 2000 nits, 4000 nits etc.
[0250] The screen receiver 1701 of the image processing device 103 is coupled to the dynamic range processor 203, which is fed the screen dynamic range indication. The dynamic range processor 203 can accordingly generate an output signal that directly corresponds to the specific screen, rather than generating the output signal for an assumed or manually defined white point luminance.
[0251] The dynamic range processor 203 can accordingly adapt the dynamic range transform in response to the received screen dynamic range indication. For example, the received encoded image can be an LDR image that is assumed to have been optimized for a 500 nits screen. If the screen dynamic range indication indicates that the screen is indeed a 500 nits screen, the image processing device 103 can use the encoded image directly. However, if the screen dynamic range indication indicates that the screen is a 1000 nits screen, a first dynamic range transform can be applied. If the screen dynamic range indication indicates that the screen 107 is a 2000 nits screen, a different transform can be applied, etc. Similarly, if the received image is an image optimized for 2000 nits, the image processing device 103 can use that image directly if the screen dynamic range indication indicates that the screen is a 2000 nits screen. However, if the screen dynamic range indication indicates that the screen is a 1000 nits screen or a 500 nits screen, the image processing device 103 can perform an appropriate dynamic range transform to reduce the dynamic range.
[0252] For example, with reference to figure 18, two different transformations can be defined for a 1000 nits screen and a 4000 nits screen respectively, with a third, one-to-one, mapping being defined for a 500 nits screen. In figure 18, the mapping for a 500 nits screen is indicated by curve 1801, the mapping for the 1000 nits screen is indicated by curve 1803, and the mapping for a 4000 nits screen is indicated by curve 1805. In this example, the received encoded image is assumed to be a 500 nits image and this is automatically converted into an image suitable for the specific screen. Thus, the image processing device 103 can automatically adapt and generate an optimized image for the specific screen to which it is connected. In particular, the image processing device 103 can automatically adapt to whether the screen is an HDR screen or an LDR screen, and can further adapt to the white point luminance of the specific screen.
[0253] It will be noted that inverse mappings can be used when mapping a higher dynamic range into a lower dynamic range.
[0254] If the screen has a white point luminance corresponding to one of the three curves of figure 18, the corresponding mapping can be applied to the encoded image. If the screen has a different luminance value, a combination of the transformations can be used.
[0255] Thus, the dynamic range processor 203 can select an appropriate dynamic range transform depending on the screen dynamic range indication. As a low complexity example, the dynamic range processor 203 can select between the curves depending on how closely the associated white point luminance matches the white point luminance indicated by the screen dynamic range indication. Specifically, the mapping associated with the white point luminance closest to the white point luminance indicated in the screen dynamic range indication can be selected. Thus, if an output LDR image is generated, the dynamic range transform can be performed using curve 1801. If an HDR image for a relatively low white point luminance is generated, the mapping of curve 1803 is used. However, if an HDR image for a high white point luminance is generated, curve 1805 is used.
[0256] If an image is to be generated for a white point luminance between those of the dynamic range transforms for the two HDR settings (e.g. for a 2000 nits screen), both mappings 1803, 1805 can be used. In particular, an interpolation between the transformed images for the two mappings can be performed. Such interpolation can be linear or non-linear. The weighting of the individual transformed images can typically depend on how close they are to the desired output maximum brightness.
[0257] For example, a first transformed image can be generated by applying the first mapping 1803 to the encoded image (the LDR image) and a second transformed image can be generated by applying the second mapping 1805 to the encoded image. The first and second transformed images are then combined (e.g. summed) to generate the output image. The weights of respectively the first and the second transformed image are determined by how closely the white point luminance associated with the respective mapping corresponds to the screen white point luminance indicated in the screen dynamic range indication.
[0258] For example, for a 1500 nits screen, the first transformed image can be weighted much higher than the second transformed image and for a 3500 nits screen the second transformed image can be weighted significantly higher than the first transformed image.
[0259] In some embodiments, the dynamic range processor 203 may be arranged to select between generating the output image as the received encoded image and generating the output image as a transformed version of the received encoded image, in response to the screen dynamic range indication.
[0260] Specifically, if the white point luminance indicated by the screen dynamic range indication is sufficiently close to the indicated or assumed white point luminance of the received image, the dynamic range transform may simply consist in not performing any processing on the received image, i.e. the input image can simply be used as the output image. However, if the white point luminance indicated by the screen dynamic range indication differs from the assumed or indicated white point luminance of the received image, the dynamic range transform can modify the received encoded image according to a suitable mapping from input pixel values to output pixel values. In these cases, the mapping can be adapted depending on the received indication of the white point luminance of the end user's screen. In another example, one or more predetermined mappings can be used.
[0261] For example, the image processing device 103 may include a first predetermined mapping that has been determined to provide an output image suitable for a doubling of the white point luminance, and a second predetermined mapping that has been determined to provide an output image suitable for a halving of the white point luminance. In this example, the image processing device 103 can select between the first mapping, the second mapping and a unity mapping depending on the white point luminance of the received image (e.g. as indicated by the reference target screen) and the white point luminance of the end user screen as indicated by the screen dynamic range indication. The image processing device 103 can specifically select the mapping that most closely matches the ratio between the white point luminances of the input image and the end user screen.
[0262] For example, if an input image is received with a reference target screen indicating that it has been optimized for a 1000 nits screen and the end user screen is a 2000 nits screen, the image processing device 103 will select the first mapping. If instead the screen dynamic range indication indicates that the end user screen is a 1000 nits screen, the image processing device 103 will select the unity mapping (i.e. use the input image directly). If the dynamic range indication indicates that the end user screen is a 500 nits screen, the image processing device 103 will select the second mapping.
[0263] If in-between values for the white point luminance of the end user's screen are received, the image processing device 103 can select the mapping closest to the ratio between the white point luminances, or it can, for example, interpolate between the mappings.
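The mapping selection by white point ratio can be sketched as below; the mapping names are illustrative, and a real implementation might compare ratios in the log domain rather than linearly.

```python
def select_mapping(source_nits, display_nits):
    """Pick the predetermined mapping whose design ratio is closest to
    the actual display/source white point luminance ratio.

    The three candidate mappings (halving, unity, doubling) follow the
    example in the text; the labels are illustrative only.
    """
    mappings = {
        0.5: "halving_mapping",
        1.0: "unity_mapping",
        2.0: "doubling_mapping",
    }
    ratio = display_nits / source_nits
    best_ratio = min(mappings, key=lambda r: abs(r - ratio))
    return mappings[best_ratio]
```

With a 1000 nits source, a 2000 nits screen selects the doubling mapping and a 500 nits screen the halving mapping; in-between screens fall to whichever design ratio is nearest.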
[0264] In the example of figure 2, the image processing device 103 is arranged to perform a dynamic range transform based on a reference target screen received from the content provider apparatus 101, but without any specific information or knowledge of the specific screen 107 (that is, it can simply generate the output image to be optimized for a given dynamic range/white point, but without explicitly knowing whether the connected screen 107 has this value). Thus, an assumed or reference white point luminance can be used. In the example of figure 17, the image processing device 103 is arranged to perform a dynamic range transform based on a screen dynamic range indication received from the screen 107, but without any specific information or knowledge of the specific dynamic range and white point luminance for which the received encoded image was generated (that is, it can simply generate the output image based on a given dynamic range/white point luminance for the received encoded image, but without explicitly knowing whether the image was actually generated for that range and luminance). Thus, an assumed or reference white point luminance for the encoded image can be used. However, it will be appreciated that in many implementations the image processing device 103 may be arranged to perform the dynamic range transform in response to information received both from the content provider side and from the end user screen. Figure 19 shows an example of an image processing device 103 comprising a dynamic range processor 203 arranged to perform a dynamic range transform in response to both the reference target screen and the screen dynamic range indication. It will also be noted that the comments and descriptions provided for the independent approaches of figures 2 and 17 apply equally (mutatis mutandis) to the system of figure 19.
[0265] The approaches can be particularly advantageous in non-homogeneous content distribution systems, as is increasingly envisaged for future television systems. In fact, the (maximum) brightness of screens is currently increasing, and in the future screens with a wide range of (maximum) brightness are expected to coexist in the market. Since the screen brightness (and typically the electro-optical transfer function that specifies how a screen converts input pixel drive (color) values into light values that then provide a particular psychovisual impression to the viewer) is generally not known on the content generation side (and is furthermore generally different from that of the reference monitor for which the content was targeted/graded), it becomes challenging to provide the best/optimal picture quality on the screen.
[0266] Thus, in the system of figure 1, the screen 107 (or sink device) can send information about its brightness capabilities (maximum brightness, gray (color) transfer function, or other gray rendering properties of its HDR range, such as a particular electro-optical transfer function etc.) back to the image processing device 103.
[0267] In the specific example, the image processing device 103 is a BDROM player connected to a screen by means of an HDMI interface, and thus the screen dynamic range indication can be communicated from the screen to the image processing device 103 through the HDMI interface. Specifically, the screen dynamic range indication can be communicated as part of the EDID information that can be signaled over HDMI from the screen 107 to the image processing device 103. However, it will be noted that the approach can be applied to many other video/graphics generating devices such as DVB receivers, ATSC receivers, personal computers, tablets, smartphones, game consoles etc. It will also be noted that many other wired and wireless interfaces can be used, such as DisplayPort, USB, Ethernet and Wi-Fi etc.
[0268] The image processing device 103 can then, for example, select one of different versions of the content/signal depending, for example, on the brightness of the screen. For example, if the signal from the content provider apparatus 101 comprises both an LDR image and an HDR image, the image processing device 103 may select between these based on whether the screen dynamic range indication is indicative of the screen being an LDR screen or an HDR screen. As another example, the image processing device 103 can interpolate/mix different brightness versions of the content to derive a new signal that is approximately ideal for the screen brightness. As yet another example, it can adapt the mapping from the encoded image to the output image.
[0269] It will be noted that in different implementations different parameters and information can be provided in the screen dynamic range indication. In particular, the comments and descriptions previously provided for a reference target screen can equally apply to the screen dynamic range indication. Thus, the parameters and information communicated from the screen 107 to the image processing device 103 may be as described for the communication of target screen information from the content provider apparatus 101 to the image processing device 103.
[0270] Specifically, the screen can communicate a maximum luminance/white point luminance for the screen, and this can be used by the dynamic range processor 203 to adapt the output signal as previously described.
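As an aside on how such a value can be carried compactly, the CTA-861 HDR static metadata data block codes desired maximum luminance as a single byte CV interpreted as 50·2^(CV/32) cd/m²; the sketch below assumes that coding as one plausible way a screen could report its white point luminance over an interface such as HDMI.

```python
def decode_max_luminance(code):
    """Decode a coded maximum-luminance byte into nits (cd/m^2).

    Assumes the 50 * 2**(code / 32) coding of the CTA-861 HDR static
    metadata data block; a code of 0 thus denotes 50 nits, and each
    increment of 32 doubles the luminance.
    """
    return 50.0 * 2.0 ** (code / 32.0)
```

For example, a coded value of 96 corresponds to 50·2³ = 400 nits under this coding.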
[0271] In some embodiments, the screen dynamic range indication may additionally or alternatively include a black point luminance for the screen 107. The black point luminance may typically indicate the luminance corresponding to the darkest pixel drive value. The intrinsic black point luminance can for some screens correspond to virtually no light being emitted. However, for many screens the darkest setting of, for example, the LCD elements still results in some light being emitted from the screen, resulting in black image areas being perceived as lighter and grayer rather than deep black. For such screens, the black point luminance information can be used by the dynamic range processor 203 to perform a tone mapping where, for example, all black levels below the screen's black point luminance are converted to the darkest pixel value (or, for example, using a more gradual transition). In some scenarios the black point luminance may include a contribution from ambient light. For example, the black point luminance can reflect the amount of light being reflected off the screen.
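The black point handling just described, with both a hard cut and an optional gradual transition, can be sketched as follows; parameter names and the shape of the transition region are illustrative assumptions.

```python
def map_black_levels(luminance, display_black_nits, soft_knee_nits=None):
    """Map scene luminances near black given the display's black point.

    Without a knee, everything below the display black point is clipped
    to the darkest achievable level (a hard cut). With a knee, the range
    [0, knee] is instead compressed gradually into [black, knee], giving
    a smoother transition above the black point.
    """
    if soft_knee_nits is None or soft_knee_nits <= display_black_nits:
        # hard cut at the display black point
        return max(luminance, display_black_nits)
    if luminance >= soft_knee_nits:
        return luminance
    # gradual transition: compress [0, knee] into [black, knee]
    span = soft_knee_nits - display_black_nits
    return display_black_nits + span * max(luminance, 0.0) / soft_knee_nits
```

The soft variant stays continuous at the knee, so only luminances below it are affected by the screen's residual light output.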
[0272] In addition, for many screens the screen dynamic range indication can include further information characterizing the screen's electro-optical transfer function (EOTF). Specifically, as previously mentioned, it may include the white point luminance and/or the black point luminance. In many systems, the screen dynamic range indication can also include more details of the screen's EOTF for the intermediate range of emitted light. Specifically, the screen dynamic range indication can include a gamma for the screen.
[0273] The dynamic range processor 203 can then use this transfer function information to adapt the specific dynamic range transform to provide the desired performance. In particular, the conversion to an HDR image can reflect not only that a brighter light output is possible, but can also consider exactly how the drive values should be generated to provide the desired emitted light in the high brightness range. Similarly, the conversion to an LDR image can reflect not only that less bright emitted light is available, but can also consider exactly how the drive values must be generated to provide the desired emitted light in the reduced brightness range.
[0274] The screen dynamic range indication can thus specifically provide information that informs the dynamic range processor 203 of how it should map input values corresponding to one dynamic range to output values corresponding to another, typically larger, dynamic range. The dynamic range processor 203 can take into account, and can for example compensate for, any variations or non-linearities in the rendering by the screen 107.
[0275] It will be noted that many different dynamic range transforms are possible and that many different ways of adapting these dynamic range transforms based on the screen dynamic range indication can be used. In fact, it will be noted that most of the comments provided for the dynamic range transform based on the reference target screen of the content provider apparatus 101 apply equally (mutatis mutandis) to the dynamic range transform based on luminance characteristic information from the end user's screen.
[0276] As a low complexity example, the dynamic range transform can simply apply a piecewise linear function to the input values of an LDR image to generate enhanced HDR values (or to the input values of an HDR image to generate enhanced LDR values). In fact, in many scenarios a simple mapping consisting of two linear segments, as illustrated in figure 20, can be used. The mapping defines a direct mapping between input pixel values and output pixel values (or in some scenarios the mapping may reflect a (possibly continuous) mapping between input pixel luminances and output pixel luminances).
[0277] Specifically, the approach provides a dynamic range transform that keeps the dark areas of an image dark, while at the same time allowing the substantially higher dynamic range to provide a much brighter rendering of the bright areas as well as an improved, more vivid looking mid range. However, the exact transformation depends on the screen on which the image is to be rendered. For example, when rendering an image for a 500 nits screen on a 1000 nits screen, a relatively modest transformation is required and the stretching of the bright areas is relatively limited. However, if the same image is to be displayed on a 5000 nits screen, a much more extreme transformation is needed to fully exploit the available light output without brightening the dark areas too much. Figure 20 illustrates how two different mappings can be used for respectively a 1000 nits screen (curve 2001, maximum value of 255 corresponding to 1000 nits) and a 5000 nits screen (curve 2003, maximum value of 255 corresponding to 5000 nits) for a 500 nits LDR input image (maximum value of 255 corresponding to 500 nits). The image processing device 103 can further determine suitable values for another maximum luminance by interpolating between the values provided. In some implementations, more points can be used to define a curve that is still piecewise linear, but with more linear intervals.
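The two-segment mapping and the interpolation between two target curves can be sketched as follows; the knee coordinates for the assumed 1000 nits and 5000 nits curves are invented for illustration, not taken from figure 20.

```python
def piecewise_map(x, knee_in, knee_out, max_out=255):
    """Two-segment piecewise linear mapping of an 8-bit input value.

    Below the knee the dark range is compressed (kept dark on the
    brighter screen); above it the bright range is stretched up to
    the full output value.
    """
    if x <= knee_in:
        return knee_out * x / knee_in
    return knee_out + (max_out - knee_out) * (x - knee_in) / (255 - knee_in)

def interpolated_map(x, t):
    """Blend a hypothetical 1000 nits curve (knee 128 -> 100) and a
    hypothetical 5000 nits curve (knee 128 -> 60) for an in-between
    screen; t in [0, 1] moves from the first curve to the second."""
    y_1000 = piecewise_map(x, 128, 100)
    y_5000 = piecewise_map(x, 128, 60)
    return (1 - t) * y_1000 + t * y_5000
```

Curves with more knee points would follow the same pattern, with one linear segment per interval.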
[0278] It will be noted that the same mappings can be used when mapping an input HDR image to an output LDR image.
[0279] In some embodiments, the dynamic range transform may comprise or consist of a gamma transform which may be dependent on the received indication of the dynamic range of the screen. Thus, in some embodiments, the dynamic range processor 203 can modify the chromaticities of the rendered image depending on the display's dynamic range indication. For example, when a received HDR image is rendered on an LDR screen, the compression can result in a flatter image with fewer variations and gradations in individual image objects. The dynamic range transform can compensate for these reductions by increasing chroma variations. For example, when an image with strong light is optimized for rendering on an HDR screen, rendering on an LDR screen with reduced dynamic range will typically make, say, a shiny apple appear duller, less bright and more matte. The dynamic range transform can compensate for this by making the apple's color more saturated. As another example, texture variations can become less perceptually significant due to reduced luminance variations, and this can be compensated for by increasing texture chroma variations.
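A minimal sketch of such chroma compensation, assuming 8-bit YCbCr pixels and an illustrative saturation gain (the text does not prescribe a specific color space or gain value):

```python
def boost_saturation(y, cb, cr, gain):
    """Scale the chroma components about their neutral point (128 for
    8-bit YCbCr) to increase saturation, e.g. to compensate for the
    duller look of an HDR-graded image shown on an LDR screen."""
    def scale(c):
        return min(255, max(0, round(128 + gain * (c - 128))))
    return y, scale(cb), scale(cr)

# Hypothetical pixel of a saturated red object, boosted by an
# illustrative gain of 1.3.
print(boost_saturation(81, 90, 240, 1.3))
```

In practice the gain could itself be derived from the display's dynamic range indication, applying more compensation the more the luminance range is compressed.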
[0280] The display's dynamic range indication can in some examples or scenarios provide generic information for a screen, such as the standard manufacturing parameters, the standard EOTF, etc. In other examples and scenarios, the display's dynamic range indication may additionally reflect the specific processing performed on the screen, and may specifically reflect user adjustments. Thus, in such examples, the display's dynamic range indication does not merely provide fixed and unchanging information that depends only on the display, but can also provide time-varying information that reflects the specific display operation.
[0281] For example, the screen can operate in different image modes with different rendering characteristics. For example, in a “vivid” screen mode, the screen may render images with light areas brighter than normal; in a “muted” screen mode it may render images with light areas darker than normal, etc. Information on the current mode, for example the specific dynamic range for that mode, can be reported to the image processing device 103 as part of the display's dynamic range indication, thus allowing the image processing device 103 to adapt the dynamic range transform to reflect the rendering characteristics. The image processing device 103 can, for example, compensate for the screen adjustment or can optimize the transform to suit the specific adjustment.
[0282] The display's dynamic range indication may also reflect other processing settings of a display. For example, clipping levels, backlight power adjustments, color gamut mappings, etc., can be communicated to the image processing device 103, where they can be used by the dynamic range processor 203 to adapt the dynamic range transform.
[0283] Figure 21 illustrates an example of elements of the screen 107, where the screen provides an indication of the screen's dynamic range to the image processing device 103.
[0284] In the example, the screen comprises a receiver 2101 that receives the image signal transmitted from the image processing device 103. The received image signal is coupled to a driver 2103 which is further coupled to a display panel 2105 that renders the image. The display panel can, for example, be an LCD or plasma display panel as will be known to the person skilled in the art.
[0285] The driver 2103 is arranged to drive the display panel 2105 to render the encoded image. In some embodiments, the driver 2103 is able to perform advanced and adaptive signal processing algorithms including tone mapping, color grading, etc. In other embodiments, the driver 2103 may have relatively low complexity and may, for example, merely perform a standard mapping of the signal from input values to drive values for the pixel elements of the display panel 2105.
[0286] In the system, the screen 107 further comprises a transmitter 2107 which is arranged to transmit a data signal to the image processing device 103. The data signal may, for example, for an HDMI connection, be communicated on a DDC channel using the E-EDID structure as will be described later.
[0287] The transmitter 2107 generates the data signal to include the display dynamic range indication for the display (107). Specifically, the transmitter 2107 includes data which indicates, for example, the white point luminance and optionally the EOTF of the screen. For example, a data value providing an index into a number of predetermined white point luminances or EOTFs can be generated and transmitted.
[0288] In some low complexity embodiments, the white point luminance may, for example, be a fixed value stored in the transmitter 2107, which merely communicates that default value. In more complex embodiments, the display's dynamic range indication can be determined to dynamically reflect varying and/or adapted values. For example, the driver 2103 can be arranged to operate in different screen modes, and the display's dynamic range indication can be adapted accordingly. As another example, the user's adjustment of, for example, a brightness level for a screen can be reflected in the display's dynamic range indication generated and transmitted by the transmitter 2107.
[0289] As previously mentioned, the display's dynamic range indication can comprise a measurement of ambient light, and the dynamic range processor can be arranged to adapt the dynamic range transform in response to the ambient light measurement. The ambient light measurement can be provided as separate, explicit data or it can be reflected in other parameters. For example, the ambient light measurement can be reflected in the black point luminance, which can include a corresponding contribution from light reflections off the screen.
[0290] In many scenarios, the screen may include a light detector positioned at the front of the screen. This light detector can detect the overall ambient light level or can specifically measure the light that hits the screen directly and is reflected back to the viewer. Based on this light detection, the screen can thus generate an ambient light indication that reflects, for example, the general ambient light level of the viewing environment or, for example, that specifically reflects an estimate of the light reflected from the screen. The screen 107 can report this value to the image processing device 103, either as an individual value or, for example, by calculating the effective black luminance level to reflect the amount of light reflections.
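The effective black level calculation could, for instance, be done as below. The Lambertian reflection approximation (reflected luminance = illuminance × reflectance / π) is a standard photometric rule of thumb and an assumption here; the text does not specify a formula.

```python
import math

def effective_black_luminance(panel_black_nits, ambient_lux, screen_reflectance):
    """Estimate the effective black level the viewer sees: the panel's
    own black luminance plus ambient light reflected off the screen.

    Uses the Lambertian approximation
        reflected nits = lux * reflectance / pi,
    which is an assumption, not something specified by the text.
    """
    return panel_black_nits + ambient_lux * screen_reflectance / math.pi

# E.g. a 0.05 nit panel black, a 200 lux living room, 4% screen reflectance.
print(effective_black_luminance(0.05, 200.0, 0.04))
```

Reporting this single combined value lets the image processing device 103 treat panel leakage and ambient reflections uniformly when deciding how far dark image content can usefully be differentiated.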
[0291] The dynamic range processor 203 can then adapt the dynamic range transform accordingly. For example, when the ambient light level is high, the additional brightness levels of an HDR screen can be used more aggressively to generate a good looking image with high contrast. For example, the average emitted light can be set relatively high and more mid-range luminances can be pushed towards the HDR range. Bright areas can be rendered using the entire HDR range, and darker areas would typically be rendered at relatively high light levels. However, the high dynamic range of an HDR image allows such a relatively bright image to still display large luminance variations and thus still have high contrast.
[0292] Thus, the HDR capabilities of the screen are used to generate an image that is perceived to be bright and to have high contrast even when viewed, for example, in bright daylight. Such an image would typically not be appropriate in a dark room, as it would be too dominant and would appear too bright. Thus, in a dark environment, the dynamic range transform would perform a much more conservative LDR to HDR transform which, for example, would keep the same emitted light as LDR for mid-range and dark values and only increase the brightness of the lighter areas.
[0293] The approach can allow the image processing device 103 to automatically adapt the LDR to HDR dynamic range transform (or, for example, an HDR to LDR dynamic range transform) to match the specific viewing environment of the screen. This is possible without requiring the image processing device 103 to make any measurements of, or indeed to be positioned in or near, this environment.
[0294] The ambient light indication can typically be optional, and thus the image processing device 103 can use it if available and otherwise simply perform a default dynamic range transform based on the screen's fixed characteristics (e.g. the EOTF of the screen).
[0295] The optional extra information provided by the screen about its viewing environment (especially the ambient light) is then used by the image processing device 103 to perform more complicated image/video optimization transforms to present the ideal image/video on the screen, where the optimization can take into account not only characteristics of the screen, but also of the viewing environment.
[0296] Thus, further optimizations can be performed when information is provided by the screen about the viewing environment. The screen will typically periodically measure the ambient light and send information about it (e.g., brightness and color in the form of three parameters: XYZ) to the image processing device 103. This information may typically not be provided as part of the EDID data or any other data structure primarily used for one-time communication of information. Instead, it can be communicated, for example, on a separate channel, such as using HDMI-CEC. Such periodic measurement and updating means, for example, that if the user turns off the light in close proximity to the screen, the image processing device 103 can automatically adapt the processing to provide images more suitable for the darker viewing situation, for example by applying different color/luminance mappings.
[0297] An example of a set of relevant parameters that can be reported by the end-user display in the display's dynamic range indication includes: • The absolute maximum luminance (white point luminance) of the end-user display. • End user screen gamma - factory setting.
[0298] The absolute maximum luminance of the end user's screen can, for example, be specified for typical screen settings, for default factory settings, or for the settings that produce the highest brightness.
[0299] Another example of a set of relevant parameters that can be reported by the end-user display in the display's dynamic range indication includes: • Maximum end-user display luminance for current brightness, contrast, etc. settings. • End user screen gamma - current settings.
[0300] The first set of parameters is time independent, whereas the second set varies over time as it depends on user settings. The application of either set has consequences for system behavior and user experience, and it will be noted that the specific set of parameters used in a particular system depends on the system's preferences and requirements. In fact, parameters can be mixed between the two sets; for example, default factory settings can be provided on power-up, with parameters dependent on the user settings being reported periodically thereafter.
[0301] It is also noted that the specific parameter sets can characterize an EOTF for a screen, be it the factory default EOTF or the current user-dependent EOTF setting. The parameters can thus provide information on the mapping between drive values and the luminance emitted from the screen, allowing the image processing device 103 to generate the drive values that will result in the desired output image. It will be noted that in other implementations other parameters can be used to characterize part or all of the mapping between drive values and emitted light for a screen.
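As a sketch, assuming the simple gamma-style EOTF discussed later in the text (the peak luminance and gamma value below are illustrative, not prescribed values), the image processing device could generate drive values by inverting the reported EOTF:

```python
def eotf(drive, peak_nits, gamma, v_max=255):
    """Display luminance for a given drive value, assuming a simple
    gamma-style EOTF: L = peak * (V / Vmax) ** gamma."""
    return peak_nits * (drive / v_max) ** gamma

def inverse_eotf(luminance, peak_nits, gamma, v_max=255):
    """Drive value the image processing device should send so that the
    display emits (approximately) the desired luminance."""
    return round(v_max * (luminance / peak_nits) ** (1.0 / gamma))

# Display reports a 1000 nit peak and gamma 2.4 (illustrative values).
drive = inverse_eotf(500.0, 1000.0, 2.4)
print(drive, eotf(drive, 1000.0, 2.4))
```

The round trip is only approximate because drive values are quantized, which is why knowing the exact EOTF matters when targeting precise output luminances.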
[0302] It will be noted that many different approaches can be used to communicate the display screen dynamic range indication to the image processing device 103.
[0303] For example, for screen parameters that are independent of user settings and do not vary over time, the communication can, for an HDMI connection, be efficiently carried over a DDC channel using the E-EDID structure.
[0304] In a low complexity approach, a set of categories can be defined for the end user screens with each category defining the relevant parameter ranges. In this approach only a category identification code for an end-user screen needs to be transmitted.
[0305] A specific example of communicating the display dynamic range indication data in an E-EDID format will be described.
[0306] In the specific example, the first 128 bytes of the E-EDID will comprise a 1.3 EDID structure (EDID base block).
[0307] For the indication of the dynamic range of the screen parameters, a new Screen Descriptor Block in the E-EDID data structure can be defined. As current devices are unaware of this new Screen Descriptor Block, they will merely ignore it, thus providing backwards compatibility. A possible format of this “Luminance Behavior” descriptor is listed in the table below.
[0308] A possible layout of this 18-byte descriptor is:
• Bytes 0-1: 00h, indicating that this 18-byte descriptor is a screen descriptor.
• Byte 2: 00h (reserved).
• Byte 3: F6h, the screen descriptor identification number indicating that this is a Luminance descriptor.
• Byte 4: 00h (reserved).
• Byte 5: Peak_Luminance, a parameter with a value between 0 and 255 that indicates the maximum display luminance according to: maximum display luminance (cd/m2) = 50 x Peak_Luminance,
• Remaining bytes: transfer curve parameters (optional; e.g. α, β, Δ).
[0309] thus covering a range from 0 to 255 x 50 = 12750 cd/m2,
[0310] or up to 255 x 100 = 25500 cd/m2 if a scale factor of 100 is used instead.
[0311] The transfer curve can be a gamma curve (as in ITU601, ITU709, etc.), but allowing for a much higher gamma (up to 10). Alternatively, a different transfer (or log) curve parameterization may in some scenarios be more appropriate. For example, instead of the gamma function, a power function could be used, where the parameters α, β and Δ can be defined to provide the desired characterization.
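Parsing such a descriptor could look as follows. The byte layout is taken from the table above and is a hypothesized format, not an existing EDID standard; the scale factor of 50 nits per code is the one stated in the text.

```python
def parse_luminance_descriptor(desc):
    """Parse the hypothetical 18-byte "Luminance Behavior" screen
    descriptor sketched above.  Assumed layout: bytes 0-2 zero (screen
    descriptor marker plus reserved byte), byte 3 = 0xF6 tag, byte 4
    reserved, byte 5 = Peak_Luminance, remaining bytes transfer-curve
    parameters."""
    if len(desc) != 18 or desc[0:3] != b'\x00\x00\x00' or desc[3] != 0xF6:
        return None  # not a Luminance descriptor; ignore for compatibility
    peak_luminance = desc[5]
    return {
        'max_luminance_nits': 50 * peak_luminance,  # cd/m2 = 50 x Peak_Luminance
        'transfer_curve': bytes(desc[6:]),          # optional curve parameters
    }

# Peak_Luminance = 100 would advertise a 5000 nit screen.
desc = b'\x00\x00\x00\xf6\x00' + bytes([100]) + bytes(12)
print(parse_luminance_descriptor(desc)['max_luminance_nits'])
```

Devices that do not recognize the 0xF6 tag would fall through the `None` branch, which mirrors the backwards-compatibility behavior described for unknown descriptor blocks.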
[0312] The additional information can then be used by the image processing device 103 to make more advanced decisions in determining the different gray levels of video and graphics (or of multiple image components), such as global processing, for example range-based modifications. With more information, such as how the screen will gamma-remap all gray values, the dynamic range processor 203 can make much smarter decisions about the final appearance of the video and secondary images (and how they may relate to each other in luminance, depending also on, for example, other geometric properties such as how big the subregions are, etc.).
[0313] In the previous examples, the screen 107 provides an indication of the screen's dynamic range that informs the image processing device 103 of how the screen will display an incoming display signal. Specifically, the display's dynamic range indication can indicate the mapping between drive values and emitted light that is applied by the display. Thus, in this example the dynamic range indication of the screen informs the image processing device 103 of the available dynamic range and how this is presented, and the image processing device 103 is free to adapt the dynamic range transform as it sees fit.
[0314] However, in some systems the screen may also be able to exert some control over the dynamic range transform performed by the image processing device 103. Specifically, the display's dynamic range indication may comprise dynamic range transform control data, and the dynamic range processor 203 may be arranged to perform the dynamic range transform in response to that dynamic range transform control data.
[0315] The control data can, for example, define a dynamic range transform operation or parameter that should be applied, can be applied, or is recommended to be applied. The control data can further be differentiated for different characteristics of the encoded image. For example, individual control data can be provided for a plurality of possible input images, such as one set for a 500 nit LDR image, another for a 1000 nit encoded image, etc.
[0316] As an example, the screen can specify the tone mapping that should be performed by the dynamic range processor 203 depending on the dynamic range of the received image. For example, for a 2000 nit screen, the control data can specify one mapping that should be used when mapping a 500 nit LDR image, another mapping that should be used when mapping a 1000 nit image, etc.
[0317] In some scenarios, the control data may specify boundaries between mappings with the mappings being predetermined within each range (eg, standardized or known on both the content provider side and the interpretation side). In some scenarios, the control data may further define the elements of different mappings or may specify the mappings precisely, for example using a gamma value or specifying a specific transform function.
[0318] In some embodiments, the dynamic range transform control data can directly and explicitly specify the dynamic range transform that must be performed to transform the received image into an image with a dynamic range corresponding to the dynamic range of the screen. For example, the control data can specify a direct mapping of input image values to output image values for a range of input image white points. The mapping can be provided as a simple parameter allowing the appropriate transform to be performed by the dynamic range processor 203, or detailed data can be provided as a specific look-up table or mathematical function.
[0319] As an example of low complexity, the dynamic range transform can simply apply the piecewise linear function to the input values of an LDR image to generate the enhanced HDR values (or to the input values of an HDR image to generate enhanced LDR values). In fact, in many scenarios, a simple mapping consisting of two linear relationships as illustrated in Figure 20 can be used.
[0320] Specifically, as previously described, such an approach can provide a dynamic range transform that keeps the dark areas of an image dark while at the same time allowing the substantially higher dynamic range to be used to provide a much brighter rendering of the bright areas, as well as a more vivid and improved looking mid-range. However, the exact transformation depends on the dynamic range of the received image as well as the dynamic range of the final target screen. In some systems, the screen may then specify a tone mapping to be performed by the image processing device 103 simply by communicating the coordinates of the knee point of the function (i.e., the intersection between the linear segments of the mapping).
[0321] An advantage of this simple relationship is that the desired tone mapping can be communicated at very low cost. In fact, a data value with just two components can specify the desired tone mapping to be performed by the image processing device 103 for different screens. Different coordinates of the knee point can be communicated for different input images, and the image processing device 103 can determine suitable values for other input images by interpolating between the provided values.
[0322] It will be noted that most of the comments provided regarding the dynamic range transform control data from the content provider apparatus 101 apply equally well (mutatis mutandis) to the dynamic range transform control data received from the screen 107.
[0323] Thus, in some scenarios the screen 107 may be in control of the dynamic range transform performed by the image processing device 103. An advantage of this approach is that it may, for example, allow a user to control the desired rendered image by controlling the screen, and without any requirement to provide user inputs or adjustments to the image processing device 103. This can be particularly advantageous in scenarios where a plurality of image processing devices are used with the same screen, and in particular can help provide consistency between images from different image processing devices.
[0324] In many implementations, the control data from the screen 107 may not specify a specific tone mapping that must be performed, but may instead provide data that defines boundaries within which the dynamic range transform/tone mapping can be freely adapted by the image processing device 103.
[0325] For example, rather than specifying a specific transition point for the curve of figure 20, the control data can define limits for the transition point (with possibly different limits being provided for different levels of maximum brightness). Then, the image processing device 103 can individually determine the desired parameters for the dynamic range transform so that this can be set to provide the preferred transition for a specific screen considering, for example, the user's specific preferences. However, at the same time the screen can restrict this freedom to an acceptable level.
[0326] Thus, the dynamic range transform control data can include data that defines the transform parameters that are to be applied by the dynamic range transform performed by the dynamic range processor 203 and/or that defines limits for the transform parameters. The control data can provide this information for a range of dynamic ranges of the input image, thus allowing adaptation of the dynamic range transform to different received images. In addition, for input images with dynamic ranges not explicitly included in the control data, appropriate data values can be generated from the available data values, for example by interpolation. For example, if a knee point between two linear pieces is indicated for a 500 nit input image and a 2000 nit input image, a suitable value for a 1000 nit input image can be found by simple interpolation (e.g. by simple averaging in the specific example).
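The interpolation step can be sketched as follows. The knee coordinates and input white levels are illustrative; linear interpolation between the two communicated knee points is one plausible reading of the text's "simple interpolation".

```python
def interpolate_knee(input_white, knees):
    """Given knee points communicated for specific input white levels
    (e.g. 500 and 2000 nits), derive a knee point for an intermediate
    input image by linear interpolation."""
    whites = sorted(knees)
    lo = max(w for w in whites if w <= input_white)
    hi = min(w for w in whites if w >= input_white)
    if lo == hi:
        return knees[lo]  # exact match, no interpolation needed
    t = (input_white - lo) / (hi - lo)
    (x0, y0), (x1, y1) = knees[lo], knees[hi]
    return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))

# Knee points communicated by the screen (illustrative values).
knees = {500: (180, 160), 2000: (180, 120)}
print(interpolate_knee(1000, knees))  # knee for a 1000 nit input image
```

Note that interpolating in the white-point luminance directly, as done here, weights the 500 nit knee more heavily for a 1000 nit input; a system could equally interpolate in a logarithmic luminance scale, which the text leaves open.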
[0327] It will be noted that many different and varied approaches to dynamic range transform and how to constrain, adapt and control the display side by additional control data can be used in different systems depending on the specific preferences and requirements of the individual application.
[0328] In some scenarios, the control data may merely provide a suggestion of a suitable mapping that can be applied, for example, in the mid-range area. In this case, the screen manufacturer can assist the image processing device 103 by providing suggested dynamic range transform parameters that have been found (for example, through manual optimization by the screen manufacturer) to provide high image quality when viewed on the specific screen. The image processing device 103 can advantageously utilize these, but is free to modify the mapping, for example, to accommodate individual user preferences.
[0329] In many scenarios, the mapping that is based at least partially on this control data will represent a functional relationship of relatively low complexity, such as a gamma mapping, an S-curve, or a combined mapping defined by partial specifications for individual ranges, etc. However, in some scenarios more complex mappings can certainly be used.
[0330] As mentioned, control data can provide mandatory or voluntary control data. In fact, the received data may include one or more fields that indicate whether the provided tone mapping parameters are mandatory, allowed or suggested.
[0331] On some systems, the screen may operate in different dynamic ranges. For example, a very bright HDR screen with a white point luminance of say 5000 nits might also operate in a display mode with a white point luminance of 4000 nits, another one with 3000 nits, one with 2000 nits, another with 1000 nits and finally it can operate in an LDR mode having a white luminance of only 500 nits.
[0332] In this scenario, the screen data signal can indicate a plurality of luminance dynamic ranges. Each of these different luminance dynamic ranges can correspond to a dynamic range mode of the screen. In this arrangement, the dynamic range processor 203 can select one of the luminance dynamic ranges and proceed to perform the dynamic range transform in response to the selected screen dynamic range. For example, the dynamic range processor 203 can select the dynamic range of 2000 nits and then proceed to perform the dynamic range transform to optimize the generated image for that white point luminance.
[0333] The selection of a suitable luminance dynamic range for the screen can depend on different aspects. In some systems, the image processing device 103 may be arranged to select a suitable screen dynamic range based on the type of image. For example, each range may be associated with a given type of image, and the image processing device 103 may select the type of image that corresponds to the received image, and then proceed to utilize the dynamic range associated with that type of image.
[0334] For example, a number of image types can be defined corresponding to different types of content. For example, one type of image can be associated with cartoons, another with a football match, another with a news program, another with a movie, etc. The image processing device 103 can then determine the appropriate type for the received image (for example, based on explicit metadata or a content analysis) and proceed to apply the corresponding dynamic range. This can, for example, result in cartoons being presented very vividly and with high contrast and high brightness, while at the same time allowing, for example, dark movies not to be rendered unnaturally brightly.
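A minimal sketch of such type-based selection; the content types, target luminances, and screen modes below are all hypothetical examples, not values given in the text.

```python
# Illustrative association of content types with a preferred white
# point luminance (all values hypothetical).
RANGE_BY_IMAGE_TYPE = {
    'cartoon':    5000,  # vivid, high-contrast, high-brightness rendering
    'sports':     3000,
    'news':       1000,  # well lit, low contrast scenes
    'dark_movie':  500,  # avoid unnaturally bright rendering
}

def select_dynamic_range(image_type, screen_modes):
    """Pick the screen dynamic range mode closest to the luminance
    associated with the detected image type."""
    target = RANGE_BY_IMAGE_TYPE.get(image_type, 1000)  # default to 1000 nits
    return min(screen_modes, key=lambda m: abs(m - target))

# Dynamic range modes reported by the screen (illustrative).
modes = [500, 1000, 2000, 3000, 4000, 5000]
print(select_dynamic_range('dark_movie', modes))
print(select_dynamic_range('cartoon', modes))
```

The image type itself could come from explicit metadata accompanying the image signal or from a content analysis, as the paragraph above describes.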
[0335] The system can then adapt to the specific signals being rendered. For example, a poorly made consumer video, a floodlit football match, a well-lit news program (e.g. low contrast scenes), etc., may be displayed differently, and specifically the dynamic range of the rendered image may be tailored to suit the specific image.
[0336] It was previously mentioned that the screen can provide control data to the image processing device 103. However, in some systems it may additionally or alternatively be the image processing device 103 that provides control data to the screen 107.
[0337] Then, as illustrated in Fig. 22, the image processing device 103 can comprise a controller 2201 that can output a display control data signal to the screen 107.
[0338] A display control signal can specifically instruct the screen to operate in the specific dynamic range mode that has been selected by the image processing device 103 for the specific image. As a result, a poorly lit amateur image will be rendered with a low dynamic range, thus avoiding the introduction of unacceptable errors due to the transformation into a high dynamic range that is not actually present in the original image. At the same time, the system can automatically adapt so that high quality images can effectively be transformed into high dynamic range images and presented as such. As a specific example, for an amateur video sequence, the image processing device 103 and screen can automatically adapt to present the video with a dynamic range of 1000 nits. However, for a high quality professionally captured image, the image processing device 103 and the screen 107 can automatically adapt to present the video using the full dynamic range of 5000 nits that the screen 107 is capable of.
[0339] The display control signal can then be generated to include commands such as “use 1000 nits dynamic range”, “use LDR range”, “use maximum dynamic range” etc.
[0340] The screen control data can be used to provide a number of commands in the forward direction (from the image processing device 103 to the screen). For example, the control data may include image processing instructions for the screen, and specifically may include an indication of tone mappings for the screen.
[0341] For example, the control data may specify a brightness adjustment, clipping adjustment, or contrast adjustment that should be applied by the screen 107. The image processing instruction may then define a mandatory, voluntary, or suggested operation that is to be performed by the screen 107 on the received display signal. This control data can thus allow the image processing device 103 to control some of the processing being performed by the screen 107.
[0342] The control data can, for example, specify that a specific filter should or should not be applied. As another example, the control data can specify how backlight operations are to be performed. For example, the screen may operate in a low energy mode that uses aggressive local dimming of the backlight, or it may be able to operate in a high energy mode where local dimming is only used when it can improve the rendering of dark areas. The control data can be used to switch the screen between these operating modes.
[0343] The control data may in some instances specify a specific tone mapping that must be performed by the screen, or it may further specify that the tone mapping function must be turned off (thus allowing the image processing device 103 to control completely all the tone mapping).
[0344] It will be noted that in some embodiments, the system may use control data in both directions, that is, both in a forward direction from the image processing device 103 to the screen 107 and in a backward direction from the screen 107 to the image processing device 103. In these cases, it may be necessary to introduce operating conditions and rules that resolve potential conflicts. For example, it can be arranged that the image processing device 103 is the master device that controls the screen 107 and overrides the screen 107 in case of conflicts. As another example, the control data can be restricted to specific parameters in each direction so that conflicts do not occur.
[0345] As another example, the master and slave relationship can be user adjustable. For example, both an image processing device 103 and a screen 107 may be arranged to provide control data to the other entity, and may specifically be able to operate as the master device. The user can in these systems designate one of the devices to be the master device, with the other one becoming the slave device. The user can specifically select this based on a preference for controlling the system from the image processing device 103 or from the screen 107.
[0346] The system described above can then allow communication between the content provider and the image processing device and/or communication between the image processing device and the screen. These approaches could be applied in many systems that feature a communication channel between a content provider and an image processing device and/or between an image processing device and a screen. Examples include BDROM, ATSC and DVB, or internet, etc.
[0347] The system can utilize a communication channel between an image processing device and a screen, such as an HDMI or DisplayPort communication interface. This communication can be in two directions. For example, if a smart screen is to do all the video and graphics mapping optimally, the image processing device can, for example, read the control parameters and reformat and transmit these in a similar HDMI structure.
[0348] The approach can particularly be applied to a BDROM system. For example, this approach can extend the BDROM specifications to allow the transmission of parameters and control commands for the target screen. Using this data, in combination with the end user's screen parameters, can allow the BDROM player, for example, to: • perform additional video and/or graphics tone mapping or other processing in the player depending on the characteristics of the target screen and of the end user's screen. • perform additional video and/or graphics tone mapping or other processing in the player driven by commands in the data stream provided by the content provider.
[0349] In some embodiments, the image processing device 103 may also comprise a transmitter for transmitting dynamic range control data to the content provider apparatus 101. Then, the image processing device 103 may control or at least influence the processing or operation performed in the content provider apparatus 101.
[0350] As a specific example, the control data may include an indication of a preferred dynamic range for the image, and may specifically include an indication of a dynamic range (e.g. white point luminance and optionally an EOTF or gamma function) of an end-user screen.
[0351] In some embodiments, the content provider apparatus 101 may be arranged to consider the preferred dynamic range indication when performing a tone mapping. However, in other embodiments, the content provider apparatus 101 may provide a number of predetermined tone mappings, for example resulting from a manual tone mapping by a tone mapping expert. For example, a tone-mapped image can be generated for a 500 nit screen, for a 1000 nit screen, and for a 2000 nit screen.
[0352] In this scenario, the content provider apparatus 101 can be arranged to select which image to transmit to the image processing device 103 based on the received control data. Specifically, the image whose dynamic range is closest to the dynamic range indicated by the control data can be selected and transmitted to the image processing device 103.
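The selection step can be sketched in a few lines. This is an assumed illustration (the variant names and white-point keys are invented): the provider holds the three pre-graded variants from the preceding paragraph and returns the one whose reference white point is closest to the white point indicated by the control data.

```python
# Hypothetical pre-graded variants keyed by reference white point (nits).
pregraded = {500: "image_500nit", 1000: "image_1000nit", 2000: "image_2000nit"}

def select_variant(requested_white_nits: float) -> str:
    """Pick the pre-graded image whose reference white point is closest
    to the dynamic range indicated by the received control data."""
    best = min(pregraded, key=lambda w: abs(w - requested_white_nits))
    return pregraded[best]

print(select_variant(800))   # image_1000nit
print(select_variant(450))   # image_500nit
```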
[0353] Such an approach may be particularly suitable for a broadcast application where the transmitted signal can be dynamically updated to match, as far as possible, the dynamic range of the end user's screen.
[0354] The approach can reduce the degree of dynamic range transform that must be applied in the image processing device 103. Specifically, for scenarios where the content provider apparatus 101 can provide a tone-mapped image with the same dynamic range as the end-user screen, it allows the dynamic range transform to be a simple null operation (i.e., it can allow the received image to be used directly by the image processing device 103).
[0355] There are several application scenarios in which the present embodiments may be useful. For example, encoding a particular white point, or directed white, or similar value together with the pixel image content (for example, a DCT encoding of the local object textures) allows for intelligent allocation of the available code levels versus the directed output luminances for various possible output signals. One can, for example, encode the texture of a dark room as if it were well lit (i.e. using pixel lumas up to 255, rather than having a maximum luma of e.g. 40 in the dark scene image), but specify that the "white", that is, the code 255, has to be treated in a particular way, namely that it has to be interpreted as dark. A simple way to use this is to co-encode an output luminance to be rendered on the screen for this luma code 255. The same can be done to predominantly encode very bright values, as, for example, in a foggy scene with strong lights in it.
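The "directed white" idea above can be made concrete with a small sketch. The 40 nit value comes from the paragraph's dark-room example; the linear interpretation and the function name are assumptions for illustration (a real system would apply the co-encoded EOTF rather than a linear ramp).

```python
# Minimal sketch: the dark room is encoded using the full luma range
# 0..255, and a co-encoded luminance for code 255 tells the renderer to
# interpret the whole texture as dark. Linear interpretation assumed.

def luma_to_output_nits(luma: int, directed_white_nits: float) -> float:
    """Map a luma code to an output luminance, given the co-encoded
    luminance that code 255 is directed to be rendered at."""
    return (luma / 255.0) * directed_white_nits

# The same luma code renders very differently depending on the
# co-encoded directed white:
print(luma_to_output_nits(255, 40.0))   # 40.0  (dark-scene interpretation)
print(luma_to_output_nits(255, 400.0))  # 400.0 (ordinary LDR white)
```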
[0356] We have explained another example by means of figure 23, namely the principle of encoding any HDR scene (approximately) into an LDR image ("HDR_encoded_as_LDR"), which could, for example, be a 10-bit standard image, but we will explain the particular interest of encoding into a classical 8-bit image, i.e. an image that is compatible with, for example, an MPEG2 or AVC standard, and could then be directly used by a classical rendering technology. Although one might want many bits for an HDR signal, for example 12, 16 or 22, 8 bits for the luma channel already carry a lot of information (many possible colors, especially to approximate complex textures) for any maximum white of a rendering. Also, many HDR signals can allow a significant degree of approximation, since, for example, the sun does not need to be encoded exactly as bright as it really is, as it will be approximated when rendered on a screen anyway. For LDR luminance ranges, even a smaller number of bits will usually be reasonably sufficient, as, for example, 6 bits gives a reasonable approximation/quality of an image (as known from print).
[0357] In the example we thus encode an HDR image exactly inside an 8-bit luma structure, applying the appropriate mappings, that is, mathematical transformations at least on the pixel lumas, which are typically simple. The criteria are, on the one hand, that one can (by co-encoding the transformations) reconstruct the HDR image (i.e., for example, an 8- or 12-bit approximation aimed at rendering on a 0.1-5000 nit screen) from the encoded 8-bit LDR image by inverting the co-encoded transformations (without needing any significant post-correction), i.e. the reconstructed HDR image will appear psychovisually (almost) indistinguishable, or at least it will still be a good HDR image (i.e. it typically shows the appearance of the HDR scene, approximating how the HDR would be rendered if it were generated directly from the original, e.g. 12-bit, HDR image IM_HDR, with the luminances of its HDR range HDR_Rng being rendered). But on the other hand, we want an LDR image such that, if the 8-bit signal is directly applied to an LDR screen of, say, 0.1-400 nit, it still allows a good visual rendering. For example, one could just linearly compress the HDR image IM_HDR to the LDR range LDR_Rng, for example by dropping the least significant bits, and assuming white (maximum code value 255) is directed to be rendered at 400 nit. However, because these HDR images typically contain very bright objects at the top of their luma range, such an 8-bit image will look very dark on an LDR screen, as the relevant darker parts of the image/scene will now end up in very low luma codes, i.e. low screen output luminances. However, much improvement can already be obtained by applying a suitable gamma before encoding the HDR/12bit/5000nit image into a classic LDR/8bit/400nit, e.g. AVC, representation.
That is, this gamma will map the bright objects into the lighter parts (e.g. making them less contrasty and pastel-like, but still acceptable on the LDR screen, and still with enough information to do a reasonable reverse mapping to HDR again), while at the same time being optimally coordinated so as not to squeeze the darker parts (e.g. a dark tree) too much, so that these dark objects still look reasonably light on the LDR screen (and so that a good dark part of the HDR image can also be recreated for dark surround viewing, or enough texture data is available for a brighter rendering of these objects on the HDR screen).
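The invertible gamma idea of the last two paragraphs can be sketched numerically. The 5000 nit HDR white and 8-bit range come from the text; the exponent 1/2.4 is an assumed illustrative value, not the patent's prescribed curve, and rounding makes the round trip approximate rather than exact.

```python
# Sketch: compress an HDR luminance range (0..5000 nit, per the text)
# into 8-bit lumas with a power-law mapping, then reconstruct
# approximate HDR luminances by inverting the co-encoded exponent.
# The exponent value is an assumption for illustration.

GAMMA = 1 / 2.4
HDR_WHITE = 5000.0

def encode_luma(luminance_nits: float) -> int:
    """HDR luminance -> 8-bit luma via a gamma-style compression."""
    return round(255 * (luminance_nits / HDR_WHITE) ** GAMMA)

def decode_luminance(luma: int) -> float:
    """Invert the co-encoded mapping to recover an approximate HDR luminance."""
    return HDR_WHITE * (luma / 255.0) ** (1 / GAMMA)

# Dark content keeps far more code levels than a linear mapping would give:
print(encode_luma(5.0))                       # a usable code, not near 0
print(decode_luminance(encode_luma(1000.0)))  # close to 1000 (rounding loss)
```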
[0358] In general such a mapping can be a generic global transformation on the lumas (that is, a mapping that does not consider the specific geometric location, such as where a pixel resides in the image, what the lumas of its nearby pixels are, or what type of scene object it belongs to, but takes only the pixel's luma value as input). Of course, more complex mappings can also be co-encoded, such as a transformation just for a demarcated sub-region or object in the image (local mapping, in which case typically more information is co-encoded, such as a definition of the object boundary). But overall, while one could envision any transformation working with our disclosed realizations, to reduce the amount of work a human grader will typically define these ideal mappings, they typically being few and simple (no local mapping will be encoded if a global function such as an S-curve or multi-point curve is sufficient).
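A co-encoded multi-point global mapping of the kind mentioned above can be applied by simple interpolation. The control points below are an invented S-like example; in practice a grader would choose them per scene and they would travel as metadata with the image.

```python
# Sketch: the metadata carries a few (input luma, output luma) control
# points, and the receiver applies the global mapping to every pixel by
# piecewise-linear interpolation. Points are illustrative assumptions.

POINTS = [(0, 0), (64, 120), (192, 200), (255, 255)]

def apply_global_mapping(luma: int) -> float:
    """Map one pixel luma through the co-encoded multi-point curve.
    Only the luma value is used: no geometric or neighborhood context."""
    for (x0, y0), (x1, y1) in zip(POINTS, POINTS[1:]):
        if x0 <= luma <= x1:
            t = (luma - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError("luma out of range")

print(apply_global_mapping(64))   # 120.0 (a control point)
print(apply_global_mapping(128))  # 160.0 (interpolated)
```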
[0359] We clarified the example with an image encoding apparatus 510 on the content creator side, with human-optimized encoding of the output image, typically an 8-bit LDR image Im_1 (typically together with the transform/mapping functions or algorithmic strategies as metadata MET in some structure S of the image signal, as prescribed in AVC or HEVC), into a memory (such as a blu-ray disk 511, or a temporary memory, for later encoding into a signal to be stored or broadcast). The grader can check the images on one or more screens 530, for example by checking that the LDR image and the recoverable HDR image look good on the respective LDR and HDR reference screens, before sending his instructions to the image encoding unit 550 (which maps to 8-bit lumas) and the formatter 554, which finalizes the image and its color codes according to the currently used image encoding standard, and co-encodes the texture image with the transformation metadata into an output 512.
[0360] At the top we see how the HDR image IM_HDR (which is input through an input 511 of the image encoding device 510), with its HDR range, is mapped to the LDR image with its LDR range of luminances as rendered on an LDR screen.
[0361] Although we have clarified "HDR_encoded_as_LDR" with an encoding on a content creation side for transmission to a content usage side such as a consumer's home, the same realizations of "HDR_encoded_as_LDR" can obviously also be used when transmitting (for example, by transcoding) between different devices, such as two home devices on a home network. In that case, for example, an automatic image analysis and mapping unit may apply an automatic image analysis and a corresponding luma mapping method. This can be done, for example, by a content receiver or storage device taking a first representation of the image, such as a 12-bit HDR image, and sending it over an HDMI or other network connection to a television. Or the 8-bit LDR image can be encoded according to a wireless standard, to transmit to a mobile screen with HDR capabilities, yet of lower visual quality anyway.
[0362] By an HDR screen we mean a screen with a maximum brightness greater than 750 nits; screens with lower maximum brightness, and especially below 500 nits, are LDR screens.
[0363] The predetermined quality criteria for judging the LDR rendering, and the HDR rendering of an HDR signal retrieved from the LDR image (typically derived only by inverting the co-encoded mappings, though some additional processing can be done, for example the receiver side of the device can apply an image processing that mitigates quantization banding), can be both a mathematical algorithm and a human operator judging whether the image encodings are good enough for distribution. Both human graders and software estimators will apply image analysis criteria such as: is there sufficient (local) contrast in multiple regions (i.e. still retaining enough of the visibility of the original, e.g. negative celluloid scan or 12-bit or 14-bit HDR image), particularly the central regions of the image; are there no artifacts such as quantization banding, and how large or wide are the steps; are there enough spatial submodes of the luminance histogram (is the original cinematic appearance/intent retained); in particular, are there enough spatially separated inter-region contrast objects, etc. And in particular, if originals are present, for example in a networked system of connected appliances, the sending appliance (for example, a set-top box) can judge whether the recoverable HDR signal is close enough to the original, e.g. 12-bit, HDR signal present at that location (which can be done based on mathematical criteria such as MSE or PSNR, or psychovisually weighted differences, etc.).
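The mathematical closeness check mentioned above (MSE/PSNR between the original and the recoverable HDR signal) can be sketched directly. The peak value and the acceptance threshold are assumptions for illustration; a deployed system would also weight the difference psychovisually, as the text notes.

```python
# Sketch: a sending appliance compares the recoverable HDR signal
# against the original via MSE/PSNR to judge whether the
# "HDR_encoded_as_LDR" round trip is close enough. Peak and threshold
# values are assumed for illustration.

import math

def psnr(original, recovered, peak=4095.0):  # peak for 12-bit samples
    """Peak signal-to-noise ratio in dB between two sample sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(original, recovered)) / len(original)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(peak ** 2 / mse)

orig = [100.0, 2000.0, 4000.0]
rec = [101.0, 1990.0, 4005.0]   # small round-trip errors
print(psnr(orig, rec) > 40)     # True: judged close enough here
```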
[0364] This signal has the advantage that any HDR compliant system knows that we actually have an HDR image encoded as an LDR image, and can ideally retrieve the HDR image before rendering, while remaining backward compatible: legacy LDR systems can directly use the LDR image for rendering.
[0365] It will be noted that the above description has, for clarity, described embodiments of the invention with reference to different functional circuits, units and processors. However, it will be evident that any suitable distribution of functionality between different functional circuits, units or processors can be utilized without detracting from the invention. For example, functionality illustrated as being performed by separate processors or controllers may be performed by the same processor or controller. Thus, references to specific functional units or circuits should only be seen as references to suitable means of providing the described functionality rather than indicative of a strict logical or physical structure or organization.
[0366] All realizations and teachings of the method correspond to realizations of the corresponding apparatus, and potentially other products such as output signals, and vice versa. The invention may be implemented in any suitable form including hardware, software, firmware or any combination thereof. The invention may optionally be implemented at least partially as computer software running on one or more data processors and/or digital signal processors. The elements and components of an embodiment of the invention may be physically, functionally and logically implemented in any suitable way. Indeed, the functionality may be implemented in a single unit, in a plurality of units, or as part of other functional units. Thus, the invention may be implemented in a single unit or may be physically and functionally distributed among different units, circuits and processors.
[0367] Although the present invention has been described in connection with some embodiments, it is not intended to be limited to the specific form set forth herein. Rather, the scope of the present invention is limited only by the appended claims. Additionally, although a feature may appear to be described in connection with particular embodiments, one skilled in the art would recognize that various features of the described embodiments may be combined in accordance with the invention. In the claims, the term "comprising" does not exclude the presence of other elements or steps.
[0368] Furthermore, although individually listed, a plurality of means, elements, circuits or method steps may be implemented, for example, by a single circuit, unit or processor. Additionally, although individual features may be included in different claims, these may possibly be advantageously combined, and the inclusion in different claims does not imply that a combination of features is not feasible and/or advantageous. Also, the inclusion of a feature in one category of claims does not imply a limitation to this category, but rather indicates that the feature is equally applicable to other claim categories as appropriate. Furthermore, the order of features in the claims does not imply any specific order in which the features must be worked, and in particular the order of individual steps in a method claim does not imply that the steps must be performed in this order. Rather, the steps may be performed in any suitable order. In addition, singular references do not exclude a plurality. Thus, references to "a", "an", "first", "second", etc. do not preclude a plurality. Reference signs in the claims are provided merely as a clarifying example and shall not be construed as limiting the scope of the claims in any way.
Claims (20)
1. IMAGE PROCESSING APPARATUS, characterized in that it comprises: a receiver (201) for receiving an image signal, the image signal comprising at least one encoded image; a second receiver (1701) for receiving a data signal from a screen (107), the data signal comprising a data field comprising an indication of the dynamic range of the screen (107), the indication of the dynamic range of the screen comprising a white point luminance, this white point luminance being able to indicate that the screen is a high dynamic range screen, in that the white point luminance can have a value of 1000 nits or above; wherein the indication of the dynamic range comprises an Electro-Optical Transfer Function for the screen; a dynamic range processor (203) arranged to generate an output image by applying a dynamic range transform to the encoded image in response to the received screen dynamic range indication, the dynamic range processor (203) being arranged to be able to apply the dynamic range transform to the image signal being a high dynamic range image signal corresponding to an absolute maximum luminance of the target screen above 500 nits; and an output (205) for outputting an output image signal comprising the output image to the screen.
2. IMAGE PROCESSING APPARATUS, according to claim 1, in which the indication of the dynamic range of the screen is further characterized by comprising a black point luminance.
3. IMAGE PROCESSING APPARATUS according to any one of claims 1 or 2, wherein the indication of the dynamic range of the screen is characterized by comprising mapping data representing a screen mapping of screen input values to a luminance dynamic range of the screen.
4. IMAGE PROCESSING APPARATUS according to any one of claims 1 to 3, wherein the data signal is characterized in that it comprises a plurality of dynamic luminance ranges; and wherein the dynamic range processor (203) is arranged to select a luminance dynamic range from the plurality of luminance dynamic ranges, and to perform the dynamic range transform in response to the selected luminance dynamic range.
5. IMAGE PROCESSING APPARATUS, according to claim 4, characterized in that the plurality of dynamic luminance ranges refer to different types of images.
6. IMAGE PROCESSING APPARATUS according to any one of claims 1 to 4, further characterized in that it comprises a controller output (2201) for outputting a control data signal to the screen, the control data signal comprising an indication of a dynamic range of luminance to be used by the screen.
7. IMAGE PROCESSING APPARATUS according to claim 6, wherein the control data signal is characterized by comprising an image processing instruction for the screen.
8. IMAGE PROCESSING APPARATUS according to claim 7, wherein the image processing instruction is characterized by comprising an indication of a tone mapping for the screen.
9. IMAGE PROCESSING APPARATUS according to any one of claims 1 to 4, in which the display's dynamic range indication is characterized by comprising an ambient light indication, and in which the dynamic range processor (203) is arranged to adapt the dynamic range transform in response to ambient light indication.
10. IMAGE PROCESSING APPARATUS, according to any one of claims 1 to 4, characterized in that the indication of the dynamic range of the screen is dependent on a user-selectable screen setting.
11. IMAGE PROCESSING APPARATUS, according to any one of claims 1 to 4, in which the indication of the dynamic range of the screen is characterized by comprising dynamic range transform control data; and wherein the dynamic range processor (203) is further arranged to perform the dynamic range transform in response to the dynamic range transform control data.
12. IMAGE PROCESSING APPARATUS according to claim 1, characterized in that the receiver (201) is further arranged to receive a target screen reference, the target screen reference being indicative of a dynamic range of a target screen for which the encoded image is encoded; and wherein the dynamic range processor (203) is arranged to apply the dynamic range transform to the encoded image in response to the target screen reference.
13. IMAGE PROCESSING APPARATUS according to any one of claims 1 to 3, characterized in that the dynamic range processor (203) is arranged to select between generating the output image as the encoded image and generating the output image as a transformed image of the first encoded image, in response to the display's dynamic range indication.
14. IMAGE PROCESSING APPARATUS according to any one of claims 1 to 3, wherein the dynamic range transform is characterized in that it comprises a gamma transform.
15. IMAGE PROCESSING APPARATUS according to any one of claims 1 to 3, further characterized in that it comprises a control data transmitter for transmitting the dynamic range control data to a source of the image signal.
16. SCREEN, characterized in that it comprises: a receiver (2101) for receiving an image signal representing at least one image; a display panel (2105); a screen driver (2103) for driving the display panel from the image signal; and a transmitter (2107) for transmitting a data signal to an image signal source, the data signal comprising a data field including an indication of the dynamic range of the screen (107), the indication of the dynamic range of the screen comprising a white point luminance, this white point luminance being able to indicate whether the screen is a high dynamic range screen, the screen having a white point luminance of at least 1000 nits; wherein the display's dynamic range indication also comprises an Electro-Optical Transfer Function for the display.
17. SCREEN according to claim 16, wherein the indication of the dynamic range of the screen is further characterized by comprising a black point luminance.
18. SCREEN, according to any one of claims 16 or 17, wherein the indication of the dynamic range of the screen is characterized in that it comprises an indication of ambient light.
19. SCREEN according to any one of claims 16 to 18, further characterized in that it comprises a second receiver for receiving a control data signal from a source of the image signal, the control data signal comprising an indication of a dynamic range of luminance to be used by the screen; and wherein the screen driver is arranged to adapt the driving in response to the dynamic range of luminance.
20. IMAGE PROCESSING METHOD, characterized in that it comprises: receiving an image signal, the image signal comprising at least one encoded image; receiving a data signal from a screen (107), the data signal comprising a data field comprising an indication of the dynamic range of the screen (107), the indication of the dynamic range of the screen comprising at least one white point luminance, this white point luminance being able to indicate whether the screen is a high dynamic range screen, and the white point luminance possibly having a value of 1000 nits or above, wherein the dynamic range indication comprises an Electro-Optical Transfer Function for the screen; generating an output image by applying a dynamic range transform to the encoded image, the image signal being a high dynamic range image signal corresponding to an absolute maximum luminance of the target screen above 500 nits, in response to the indication of the dynamic range of the screen; and outputting an output image signal comprising the output image to the screen.
类似技术:
公开号 | 公开日 | 专利标题
BR112014006978B1|2021-08-31|IMAGE PROCESSING APPARATUS, SCREEN AND IMAGE PROCESSING METHOD.
JP6596125B2|2019-10-23|Method and apparatus for creating a code mapping function for encoding of HDR images, and method and apparatus for use of such encoded images
US11183143B2|2021-11-23|Transitioning between video priority and graphics priority
JP6495552B2|2019-04-03|Dynamic range coding for images and video
KR102135841B1|2020-07-22|High dynamic range image signal generation and processing
JP6009539B2|2016-10-19|Apparatus and method for encoding and decoding HDR images
同族专利:
公开号 | 公开日
CN103827956B|2017-06-13|
RU2014116971A|2015-11-10|
US20190172187A1|2019-06-06|
CA2850037A1|2013-04-04|
AU2012313936A1|2014-05-15|
AU2017235970B2|2019-08-08|
AU2017235970A1|2017-10-19|
JP6430577B2|2018-11-28|
US10916000B2|2021-02-09|
ZA201403081B|2015-11-25|
MX2014003554A|2014-06-05|
JP6407717B2|2018-10-17|
RU2640750C2|2018-01-11|
RU2017137235A3|2021-02-25|
AU2017235969A1|2017-10-19|
CN103827956A|2014-05-28|
UA116082C2|2018-02-12|
EP2745290A1|2014-06-25|
JP2014532195A|2014-12-04|
CN103843058A|2014-06-04|
PE20141864A1|2014-12-12|
CA2850031A1|2013-04-04|
AU2012313935A1|2014-05-15|
US20190156471A1|2019-05-23|
AU2012313935B9|2017-05-04|
KR101972748B1|2019-08-16|
JP6509262B2|2019-05-08|
JP2014531821A|2014-11-27|
RU2017137235A|2019-02-11|
BR112014006977A2|2017-04-04|
PH12017502053A1|2018-04-23|
JP2017117476A|2017-06-29|
JP6133300B2|2017-05-24|
CN107103588A|2017-08-29|
AU2017235969B2|2019-08-08|
JP2017184238A|2017-10-05|
US20140225941A1|2014-08-14|
IN2014CN01863A|2015-05-29|
RU2643485C2|2018-02-01|
WO2013046096A1|2013-04-04|
WO2013046095A1|2013-04-04|
EP2745507A1|2014-06-25|
ZA201403080B|2015-11-25|
BR112014006978A2|2017-04-04|
RU2761120C2|2021-12-06|
AU2012313936B2|2017-06-29|
US20140210847A1|2014-07-31|
KR20140066771A|2014-06-02|
MX2014003556A|2014-05-28|
AU2012313935B2|2017-04-20|
CN103843058B|2016-11-23|
RU2014116969A|2015-11-10|
US20210142453A1|2021-05-13|
引用文献:
公开号 | 申请日 | 公开日 | 申请人 | 专利标题

JP3271487B2|1995-08-30|2002-04-02|凸版印刷株式会社|Image database with image characteristics and image processing device|
EP0856829B1|1997-01-31|2008-09-24|Hitachi, Ltd.|Image displaying system and information processing apparatus with control of display attributes specific for a defined display region|
GB2335326B|1997-10-31|2002-04-17|Sony Corp|Image processing apparatus and method and providing medium.|
JP2002149149A|2000-11-13|2002-05-24|Sony Corp|Color calibration method and apparatus|
JP4155723B2|2001-04-16|2008-09-24|富士フイルム株式会社|Image management system, image management method, and image display apparatus|
DE10213514C1|2002-03-26|2003-11-27|Michael Kaatze|coffee brewer|
US20040061709A1|2002-09-17|2004-04-01|Lg Electronics Inc.|Method and apparatus for driving plasma display panel|
KR100712334B1|2002-09-30|2007-05-02|엘지전자 주식회사|Method for controling a brightness level of LCD|
JP3661692B2|2003-05-30|2005-06-15|セイコーエプソン株式会社|Illumination device, projection display device, and driving method thereof|
JP3975357B2|2003-06-12|2007-09-12|船井電機株式会社|LCD television equipment|
JP2005086226A|2003-09-04|2005-03-31|Auto Network Gijutsu Kenkyusho:Kk|Imaging unit|
JP4617085B2|2004-02-16|2011-01-19|キヤノン株式会社|Image display device and image display method|
US8218625B2|2004-04-23|2012-07-10|Dolby Laboratories Licensing Corporation|Encoding, decoding and representing high dynamic range images|
US7085414B2|2004-05-05|2006-08-01|Canon Kabushiki Kaisha|Characterization of display devices by averaging chromaticity values|
CN1951101A|2004-05-11|2007-04-18|皇家飞利浦电子股份有限公司|Method for processing color image data|
ES2551561T3|2006-01-23|2015-11-19|MAX-PLANCK-Gesellschaft zur Förderung der Wissenschaften e.V.|High dynamic range codecs|
US8014445B2|2006-02-24|2011-09-06|Sharp Laboratories Of America, Inc.|Methods and systems for high dynamic range video coding|
JP4203081B2|2006-05-19|2008-12-24|株式会社東芝|Image display device and image display method|
US8872753B2|2006-08-31|2014-10-28|Ati Technologies Ulc|Adjusting brightness of a display image in a display having an adjustable intensity light source|
JP4687918B2|2007-07-24|2011-05-25|富士ゼロックス株式会社|Image processing apparatus and program|
US8330768B2|2007-07-27|2012-12-11|Sharp Laboratories Of America, Inc.|Apparatus and method for rendering high dynamic range images for standard dynamic range display|
US8135230B2|2007-07-30|2012-03-13|Dolby Laboratories Licensing Corporation|Enhancing dynamic ranges of images|
US8233738B2|2007-07-30|2012-07-31|Dolby Laboratories Licensing Corporation|Enhancing dynamic ranges of images|
US8223113B2|2007-12-26|2012-07-17|Sharp Laboratories Of America, Inc.|Methods and systems for display source light management with variable delay|
WO2009095733A1|2008-01-31|2009-08-06|Thomson Licensing|Method and system for look data definition and transmission|
JP4544319B2|2008-03-11|2010-09-15|富士フイルム株式会社|Image processing apparatus, method, and program|
CN101542582B|2008-05-08|2011-12-14|香港应用科技研究院有限公司|Method and apparatus for enhancing the dynamic range of an image|
JP5690267B2|2008-08-22|2015-03-25|トムソン ライセンシングThomson Licensing|Method and system for content delivery|
JP2010114839A|2008-11-10|2010-05-20|Canon Inc|Image processing device and image processing method|
US8831343B2|2009-01-19|2014-09-09|Dolby Laboratories Licensing Corporation|Image processing and displaying methods for devices that implement color appearance models|
RU2533855C2|2009-03-06|2014-11-20|Конинклейке Филипс Электроникс Н.В.|Method of converting input image data into output image data, image conversion unit for converting input image data into output image data, image processing device, display device|
WO2010105036A1|2009-03-13|2010-09-16|Dolby Laboratories Licensing Corporation|Layered compression of high dynamic range, visual dynamic range, and wide color gamut video|
EP2230839A1|2009-03-17|2010-09-22|Koninklijke Philips Electronics N.V.|Presentation of video content|
WO2010128962A1|2009-05-06|2010-11-11|Thomson Licensing|Methods and systems for delivering multimedia content optimized in accordance with presentation device capabilities|
WO2010132237A1|2009-05-11|2010-11-18|Dolby Laboratories Licensing Corporation|Light detection, color appearance models, and modifying dynamic range for image display|
CN101930719A|2009-06-18|2010-12-29|辉达公司|Method and system for automatically switching scene mode of display|
JP2011010108A|2009-06-26|2011-01-13|Seiko Epson Corp|Imaging control apparatus, imaging apparatus, and imaging control method|
EP2539884B1|2010-02-24|2018-12-12|Dolby Laboratories Licensing Corporation|Display management methods and apparatus|
ES2556383T3|2010-03-03|2016-01-15|Koninklijke Philips N.V.|Apparatus and procedures for defining color regimes|
US20110242142A1|2010-03-30|2011-10-06|Ati Technologies Ulc|Multiple display chrominance and luminance method and apparatus|
CN101888487B|2010-06-02|2012-03-14|中国科学院深圳先进技术研究院|High dynamic range video imaging system and image generating method|
EP2580748B1|2010-06-14|2022-02-23|Barco NV|Luminance boost method and system|
US9300938B2|2010-07-22|2016-03-29|Dolby Laboratories Licensing Corporation|Systems, apparatus and methods for mapping between video ranges of image data and display|
US9509935B2|2010-07-22|2016-11-29|Dolby Laboratories Licensing Corporation|Display management server|
US8525933B2|2010-08-02|2013-09-03|Dolby Laboratories Licensing Corporation|System and method of creating or approving multiple video streams|
US9549197B2|2010-08-16|2017-01-17|Dolby Laboratories Licensing Corporation|Visual dynamic range timestamp to enhance data coherency and potential of metadata using delay information|
TWI479898B|2010-08-25|2015-04-01|Dolby Lab Licensing Corp|Extending image dynamic range|
US9451292B2|2011-09-15|2016-09-20|Dolby Laboratories Licensing Corporation|Method and system for backward compatible, extended dynamic range encoding of video|JP2638264B2|1990-07-30|1997-08-06|ヤマハ株式会社|Electronic musical instrument controller|
US8988552B2|2011-09-26|2015-03-24|Dolby Laboratories Licensing Corporation|Image formats and related methods and apparatuses|
US10242650B2|2011-12-06|2019-03-26|Dolby Laboratories Licensing Corporation|Perceptual luminance nonlinearity-based image data exchange across different display capabilities|
KR101812469B1|2011-12-06|2017-12-27|돌비 레버러토리즈 라이쎈싱 코오포레이션|Method of improving the perceptual luminance nonlinearity-based image data exchange across different display capabilities|
AR091515A1|2012-06-29|2015-02-11|Sony Corp|DEVICE AND METHOD FOR IMAGE PROCESSING|
KR102037716B1|2012-11-23|2019-10-30|삼성디스플레이 주식회사|Method of storing gamma data in a display device, display device and method of operating a display device|
US9332215B1|2013-01-23|2016-05-03|Rockwell Collins, Inc.|System and method for displaying to airborne personnel mission critical high dynamic range video on a low resolution display|
JP6420540B2|2013-02-04|2018-11-07|キヤノン株式会社|Image processing apparatus, control method therefor, program, and storage medium|
WO2014130343A2|2013-02-21|2014-08-28|Dolby Laboratories Licensing Corporation|Display management for high dynamic range video|
US10055866B2|2013-02-21|2018-08-21|Dolby Laboratories Licensing Corporation|Systems and methods for appearance mapping for compositing overlay graphics|
JP6104411B2|2013-02-21|2017-03-29|ドルビー ラボラトリーズ ライセンシング コーポレイション|Appearance mapping system and apparatus for overlay graphics synthesis|
CN109064433A|2013-02-21|2018-12-21|皇家飞利浦有限公司|Improved HDR image coding and decoding methods and equipment|
CN110418166A|2013-04-30|2019-11-05|索尼公司|Sending device, sending method, receiving device and method of reseptance|
CN105324997B|2013-06-17|2018-06-29|杜比实验室特许公司|For enhancing the adaptive shaping of the hierarchical coding of dynamic range signal|
EP3013040A4|2013-06-20|2016-12-28|Sony Corp|Reproduction device, reproduction method, and recording medium|
JP2015005878A|2013-06-20|2015-01-08|ソニー株式会社|Reproduction device, reproduction method and recording medium|
TWI711310B|2013-06-21|2020-11-21|日商新力股份有限公司|Transmission device, high dynamic range image data transmission method, reception device, high dynamic range image data reception method and program|
JP2015008360A|2013-06-24|2015-01-15|ソニー株式会社|Reproducing apparatus, reproducing method and recording medium|
JP2015008361A|2013-06-24|2015-01-15|ソニー株式会社|Reproducing apparatus, reproducing method and recording medium|
CA2914986A1|2013-06-24|2014-12-31|Sony Corporation|Reproduction device, reproduction method, and recording medium|
JP6528683B2|2013-07-12|2019-06-12|ソニー株式会社|Reproducing apparatus, reproducing method|
KR20160031466A|2013-07-14|2016-03-22|엘지전자 주식회사|Method and apparatus for transmitting and receiving ultra high-definition broadcasting signal for expressing high-quality color in digital broadcasting system|
EP3022895B1|2013-07-18|2019-03-13|Koninklijke Philips N.V.|Methods and apparatuses for creating code mapping functions for encoding an hdr image, and methods and apparatuses for use of such encoded images|
WO2015007910A1|2013-07-19|2015-01-22|Koninklijke Philips N.V.|Hdr metadata transport|
KR102198673B1|2013-08-20|2021-01-05|소니 주식회사|Reproduction device, reproduction method, and recording medium|
US9264683B2|2013-09-03|2016-02-16|Sony Corporation|Decoding device and decoding method, encoding device, and encoding method|
WO2015034188A1|2013-09-06|2015-03-12|엘지전자 주식회사|Method and apparatus for transmitting and receiving ultra-high definition broadcasting signal for high dynamic range representation in digital broadcasting system|
KR102113178B1|2013-09-12|2020-05-21|삼성디스플레이 주식회사|Display apparatus and liquid crystal display apparatus|
CN105556606B|2013-09-27|2020-01-17|索尼公司|Reproducing apparatus, reproducing method, and recording medium|
US9460118B2|2014-09-30|2016-10-04|Duelight Llc|System, method, and computer program product for exchanging images|
US9460125B2|2013-09-30|2016-10-04|Duelight Llc|Systems, methods, and computer program products for digital photography|
ES2784691T3|2013-10-10|2020-09-29|Dolby Laboratories Licensing Corp|Viewing DCI and other content on an enhanced dynamic range projector|
JP6202330B2|2013-10-15|2017-09-27|ソニー株式会社|Decoding device and decoding method, and encoding device and encoding method|
CN106713697B|2013-10-22|2019-02-12|杜比实验室特许公司|Guided color grading for extended dynamic range images|
US9554020B2|2013-11-13|2017-01-24|Dolby Laboratories Licensing Corporation|Workflow for content creation and guided display management of EDR video|
WO2015076608A1|2013-11-21|2015-05-28|엘지전자 주식회사|Video processing method and video processing apparatus|
EP3072288B1|2013-11-22|2019-06-12|Dolby Laboratories Licensing Corporation|Methods and systems for inverse tone mapping|
US10291827B2|2013-11-22|2019-05-14|Futurewei Technologies, Inc.|Advanced screen content coding solution|
CN105981380B|2013-12-18|2019-08-20|寰发股份有限公司|Method and apparatus for coding a block of video data using palette coding|
WO2015096812A1|2013-12-27|2015-07-02|Mediatek Inc.|Method and apparatus for palette coding with cross block prediction|
KR20160102438A|2013-12-27|2016-08-30|톰슨 라이센싱|Method and device for tone-mapping a high dynamic range image|
EP3087743A4|2013-12-27|2017-02-22|HFI Innovation Inc.|Method and apparatus for major color index map coding|
CA3020374C|2013-12-27|2021-01-05|Hfi Innovation Inc.|Method and apparatus for syntax redundancy removal in palette coding|
US10484696B2|2014-01-07|2019-11-19|Mediatek Inc.|Method and apparatus for color index prediction|
CA2936313A1|2014-01-24|2015-07-30|Sony Corporation|Transmission device, transmission method, reception device, and reception method|
KR102285955B1|2014-02-07|2021-08-05|소니그룹주식회사|Transmission device, transmission method, reception device, reception method, display device, and display method|
KR20150099672A|2014-02-22|2015-09-01|삼성전자주식회사|Electronic device and display controlling method of the same|
EP3111644A1|2014-02-25|2017-01-04|Apple Inc.|Adaptive transfer function for video encoding and decoding|
EP3111635B1|2014-02-27|2018-06-27|Dolby Laboratories Licensing Corporation|Systems and methods to control judder visibility|
JP6439418B2|2014-03-05|2018-12-19|ソニー株式会社|Image processing apparatus, image processing method, and image display apparatus|
EP3055830A4|2014-03-21|2017-02-22|Huawei Technologies Co., Ltd.|Advanced screen content coding with improved color table and index map coding methods|
CN106464966B|2014-05-12|2020-12-08|索尼公司|Communication apparatus, communication method, and computer-readable storage medium|
EP3145206B1|2014-05-15|2020-07-22|Sony Corporation|Communication apparatus, communication method, and computer program|
JP6751901B2|2014-05-16|2020-09-09|パナソニックIpマネジメント株式会社|Luminance conversion method, brightness conversion device and video display device|
WO2015174026A1|2014-05-16|2015-11-19|パナソニックIpマネジメント株式会社|Conversion method and conversion device|
KR101785671B1|2014-05-20|2017-11-06|엘지전자 주식회사|Method and apparatus for processing video data for display adaptive image reproduction|
US10091512B2|2014-05-23|2018-10-02|Futurewei Technologies, Inc.|Advanced screen content coding with improved palette table and index map coding methods|
WO2015180854A1|2014-05-28|2015-12-03|Koninklijke Philips N.V.|Methods and apparatuses for encoding an hdr images, and methods and apparatuses for use of such encoded images|
US9786251B1|2014-05-28|2017-10-10|Musco Corporation|Apparatus, method, and system for visually indicating perceived glare thresholds|
JP5948619B2|2014-06-10|2016-07-06|パナソニックIpマネジメント株式会社|Display system, display method, and display device|
JP6643669B2|2014-06-10|2020-02-12|パナソニックIpマネジメント株式会社|Display device and display method|
CN106105177B|2014-06-10|2019-09-27|松下知识产权经营株式会社|Conversion method and conversion device|
US20170064242A1|2014-06-13|2017-03-02|Sony Corporation|Transmission device, transmission method, reception device, and reception method|
WO2015194101A1|2014-06-16|2015-12-23|パナソニックIpマネジメント株式会社|Playback method and playback apparatus|
EP2958075A1|2014-06-20|2015-12-23|Thomson Licensing|Method and apparatus for dynamic range expansion of LDR video sequence|
CN105493490B|2014-06-23|2019-11-29|松下知识产权经营株式会社|Conversion method and conversion device|
WO2015198552A1|2014-06-25|2015-12-30|パナソニックIpマネジメント株式会社|Content data generating method, video stream transmission method and video display method|
EP3713247A1|2014-06-26|2020-09-23|Panasonic Intellectual Property Management Co., Ltd.|Data output device, data output method, and data generation method|
CN111901599A|2014-06-27|2020-11-06|松下知识产权经营株式会社|Reproducing apparatus|
MX2019005231A|2014-06-27|2019-09-10|Panasonic Ip Man Co Ltd|Data output device, data output method, and data generation method.|
WO2016002154A1|2014-06-30|2016-01-07|パナソニックIpマネジメント株式会社|Data reproduction method and reproduction device|
MX2019008149A|2014-06-30|2019-09-05|Panasonic Ip Man Co Ltd|Data reproduction method and reproduction device.|
US9613407B2|2014-07-03|2017-04-04|Dolby Laboratories Licensing Corporation|Display management for high dynamic range video|
JP6478499B2|2014-07-07|2019-03-06|キヤノン株式会社|Image processing apparatus, image processing method, and program|
JP6421504B2|2014-07-28|2018-11-14|ソニー株式会社|Image processing apparatus and image processing method|
JP6466258B2|2014-08-07|2019-02-06|Panasonic Intellectual Property Corporation of America|REPRODUCTION DEVICE, REPRODUCTION METHOD, AND RECORDING MEDIUM|
WO2016021120A1|2014-08-07|2016-02-11|パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ|Reproduction device, reproduction method, and recording medium|
CN107005720B|2014-08-08|2020-03-06|皇家飞利浦有限公司|Method and apparatus for encoding HDR images|
JP2016538745A|2014-08-08|2016-12-08|エルジー エレクトロニクス インコーポレイティド|Video data processing method and apparatus for display adaptive video playback|
TWI539433B|2014-08-13|2016-06-21|友達光電股份有限公司|Curved display apparatus and gamma correction method thereof|
MX2019008379A|2014-08-19|2019-09-09|Panasonic Ip Man Co Ltd|Transmission method, reproduction method and reproduction device.|
WO2016027426A1|2014-08-19|2016-02-25|パナソニックIpマネジメント株式会社|Video stream generation method, playback apparatus, and recording medium|
CN110460792B|2014-08-19|2022-03-08|松下知识产权经营株式会社|Reproducing method and reproducing apparatus|
JP6331882B2|2014-08-28|2018-05-30|ソニー株式会社|Transmitting apparatus, transmitting method, receiving apparatus, and receiving method|
TWI683307B|2014-09-08|2020-01-21|日商新力股份有限公司|Information processing device, information recording medium, information processing method and program|
JP2016058848A|2014-09-08|2016-04-21|ソニー株式会社|Image processing system and image processing method|
JP6134076B2|2014-09-08|2017-05-24|ソニー株式会社|Information processing apparatus, information recording medium, information processing method, and program|
JPWO2016038791A1|2014-09-10|2017-06-22|Panasonic Intellectual Property Corporation of America|Recording medium, reproducing apparatus and reproducing method|
EP3193326B1|2014-09-11|2019-05-15|Sony Corporation|Image-processing device, and image-processing method|
WO2016040906A1|2014-09-11|2016-03-17|Grundy Kevin Patrick|System and method for controlling dynamic range compression image processing|
EP3196881B1|2014-09-12|2021-12-22|Sony Group Corporation|Playback device, playback method, information processing device, information processing method, program, and recording medium|
WO2016039170A1|2014-09-12|2016-03-17|ソニー株式会社|Information processing device, information processing method, program, and recording medium|
WO2016039169A1|2014-09-12|2016-03-17|ソニー株式会社|Playback device, playback method, information processing device, information processing method, program, and recording medium|
JP2016062637A|2014-09-12|2016-04-25|Panasonic Intellectual Property Corporation of America|Recording medium, reproduction apparatus and reproducing method|
CN111933189B|2014-09-12|2022-01-04|松下电器(美国)知识产权公司|Reproduction device and reproduction method|
JP5995129B2|2014-09-22|2016-09-21|パナソニックIpマネジメント株式会社|Reproduction method and reproduction apparatus|
MX367898B|2014-09-22|2019-09-11|Panasonic Ip Man Co Ltd|Playback method and playback device.|
US9729801B2|2014-10-02|2017-08-08|Dolby Laboratories Licensing Corporation|Blending images using mismatched source and display electro-optical transfer functions|
JP6510039B2|2014-10-02|2019-05-08|ドルビー ラボラトリーズ ライセンシング コーポレイション|Dual-end metadata for judder visibility control|
JP6539032B2|2014-10-06|2019-07-03|キヤノン株式会社|Display control apparatus, display control method, and program|
US10313687B2|2014-10-10|2019-06-04|Koninklijke Philips N.V.|Saturation processing specification for dynamic range mappings|
KR20160044954A|2014-10-16|2016-04-26|삼성전자주식회사|Method for providing information and electronic device implementing the same|
JP2016081553A|2014-10-17|2016-05-16|Panasonic Intellectual Property Corporation of America|Recording medium, reproducing method and reproduction apparatus|
US9448771B2|2014-10-17|2016-09-20|Duelight Llc|System, computer program product, and method for generating a lightweight source code for implementing an image processing pipeline|
WO2016063474A1|2014-10-21|2016-04-28|パナソニックIpマネジメント株式会社|Reproduction device, display device, and transmission method|
TWI685837B|2014-10-23|2020-02-21|日商新力股份有限公司|Information processing device, information processing method, program product, and recording medium|
WO2016063475A1|2014-10-24|2016-04-28|パナソニックIpマネジメント株式会社|Transmission method and reproduction device|
EP3213291B1|2014-10-27|2019-11-06|Dolby Laboratories Licensing Corporation|Content mapping using extended color range|
EP3217656B1|2014-11-04|2021-03-03|Panasonic Intellectual Property Corporation of America|Reproduction method, reproduction device, and program|
PL3217672T3|2014-11-07|2021-08-16|Sony Corporation|Transmission device, transmission method, reception device, and reception method|
JP2016100039A|2014-11-17|2016-05-30|Panasonic Intellectual Property Corporation of America|Recording medium, playback method, and playback device|
US9508133B2|2014-11-18|2016-11-29|Duelight Llc|System and method for generating an image result based on availability of a network resource|
EP3029925A1|2014-12-01|2016-06-08|Thomson Licensing|A method and device for estimating a color mapping between two different color-graded versions of a picture|
JP6601729B2|2014-12-03|2019-11-06|パナソニックIpマネジメント株式会社|Data generation method, data reproduction method, data generation device, and data reproduction device|
CN112492318B|2014-12-03|2022-02-18|松下知识产权经营株式会社|Data generating device|
KR20160067275A|2014-12-03|2016-06-14|삼성디스플레이 주식회사|Display device and method of driving a display device|
JP6741975B2|2014-12-09|2020-08-19|パナソニックIpマネジメント株式会社|Transmission method and transmission device|
WO2016092759A1|2014-12-09|2016-06-16|パナソニックIpマネジメント株式会社|Transmission method, reception method, transmitting device, and receiving device|
ES2825699T3|2014-12-11|2021-05-17|Koninklijke Philips Nv|High dynamic range imaging and optimization for home screens|
CN107111980B|2014-12-11|2021-03-09|皇家飞利浦有限公司|Optimizing high dynamic range images for specific displays|
TW201633779A|2014-12-16|2016-09-16|湯姆生特許公司|Method and device of converting a HDR version of a picture to a SDR version of said picture|
EP3035678A1|2014-12-16|2016-06-22|Thomson Licensing|Method and device of converting a high-dynamic-range version of a picture to a standard-dynamic-range version of said picture|
US10741211B2|2014-12-22|2020-08-11|Sony Corporation|Information processing device, information recording medium, and information processing method|
CN112383695A|2014-12-29|2021-02-19|索尼公司|Transmitting apparatus, receiving apparatus and receiving method|
PL3248367T3|2015-01-19|2018-12-31|Dolby Laboratories Licensing Corporation|Display management for high dynamic range video|
BR112017016147A2|2015-01-27|2018-04-17|Thomson Licensing|methods, systems and apparatus for electro-optical and opto-electrical conversion of images and video|
EP3251337A1|2015-01-29|2017-12-06|Koninklijke Philips N.V.|Local dynamic range adjustment color processing|
CN111654697A|2015-01-30|2020-09-11|交互数字Vc控股公司|Method and apparatus for encoding and decoding color picture|
EP3051821A1|2015-01-30|2016-08-03|Thomson Licensing|Method and apparatus for encoding and decoding high dynamic rangevideos|
EP3051825A1|2015-01-30|2016-08-03|Thomson Licensing|A method and apparatus of encoding and decoding a color picture|
KR102295970B1|2015-02-06|2021-08-30|엘지전자 주식회사|Image display apparatus|
GB2534929A|2015-02-06|2016-08-10|British Broadcasting Corp|Method and apparatus for conversion of HDR signals|
JP6702300B2|2015-03-05|2020-06-03|ソニー株式会社|Transmission device, transmission method, reception device, and reception method|
EP3067882A1|2015-03-10|2016-09-14|Thomson Licensing|Adaptive color grade interpolation method and device|
JP6463179B2|2015-03-17|2019-01-30|キヤノン株式会社|Signal processing apparatus, signal processing method, and imaging apparatus|
WO2016153896A1|2015-03-20|2016-09-29|Dolby Laboratories Licensing Corporation|Signal reshaping approximation|
JP2016178595A|2015-03-23|2016-10-06|シャープ株式会社|Receiver, reception method and program|
WO2016152684A1|2015-03-24|2016-09-29|ソニー株式会社|Transmission device, transmission method, reception device, and reception method|
US10097886B2|2015-03-27|2018-10-09|Panasonic Intellectual Property Management Co., Ltd.|Signal processing device, record/replay device, signal processing method, and program|
US10109228B2|2015-04-10|2018-10-23|Samsung Display Co., Ltd.|Method and apparatus for HDR on-demand attenuation control|
KR102322709B1|2015-04-29|2021-11-08|엘지디스플레이 주식회사|Image processing method, image processing circuit and display device using the same|
US10257526B2|2015-05-01|2019-04-09|Disney Enterprises, Inc.|Perceptual color transformations for wide color gamut video coding|
WO2016182307A1|2015-05-11|2016-11-17|삼성전자 주식회사|Image processing apparatus and image processing method based on metadata|
JP6731722B2|2015-05-12|2020-07-29|Panasonic Intellectual Property Corporation of America|Display method and display device|
WO2016181819A1|2015-05-12|2016-11-17|ソニー株式会社|Image-processing device, image-processing method, and program|
WO2016181584A1|2015-05-12|2016-11-17|パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ|Display method and display device|
JP6980858B2|2015-05-12|2021-12-15|Panasonic Intellectual Property Corporation of America|Display method and display device|
KR102337159B1|2015-05-21|2021-12-08|삼성전자주식회사|Apparatus and method for outputting content, and display apparatus|
JP6663214B2|2015-05-26|2020-03-11|Panasonic Intellectual Property Corporation of America|Display method and display device|
JP6666022B2|2015-06-04|2020-03-13|キヤノン株式会社|Image display device, image output device, and control method thereof|
KR102059256B1|2015-06-05|2019-12-24|애플 인크.|Render and display HDR content|
US10007412B2|2015-06-24|2018-06-26|Samsung Electronics Co., Ltd.|Tone mastering system with creative intent metadata|
CN107736017B|2015-06-25|2020-07-28|三菱电机株式会社|Video playback device and video playback method|
US11245939B2|2015-06-26|2022-02-08|Samsung Electronics Co., Ltd.|Generating and transmitting metadata for virtual reality|
GB2539917B|2015-06-30|2021-04-07|British Broadcasting Corp|Method and apparatus for conversion of HDR signals|
EP3314893A1|2015-06-30|2018-05-02|Dolby Laboratories Licensing Corporation|Real-time content-adaptive perceptual quantizer for high dynamic range images|
EP3113496A1|2015-06-30|2017-01-04|Thomson Licensing|Method and device for encoding both a hdr picture and a sdr picture obtained from said hdr picture using color mapping functions|
EP3113495A1|2015-06-30|2017-01-04|Thomson Licensing|Methods and devices for encoding and decoding a hdr color picture|
JP6611494B2|2015-07-08|2019-11-27|キヤノン株式会社|Image display apparatus and control method thereof|
KR102309676B1|2015-07-24|2021-10-07|삼성전자주식회사|User adaptive image compensator|
WO2017022513A1|2015-07-31|2017-02-09|ソニー株式会社|Video signal processing device, method for processing video signal, and display device|
US10885614B2|2015-08-19|2021-01-05|Samsung Electronics Co., Ltd.|Electronic device performing image conversion, and method thereof|
JP2018530942A|2015-08-24|2018-10-18|Thomson Licensing|Encoding and decoding methods and corresponding devices|
WO2017032822A1|2015-08-25|2017-03-02|Thomson Licensing|Inverse tone mapping based on luminance zones|
WO2017040237A1|2015-08-28|2017-03-09|Arris Enterprises Llc|Color volume transforms in coding of high dynamic range and wide color gamut sequences|
KR20180048627A|2015-08-31|2018-05-10|톰슨 라이센싱|Method and apparatus for reverse tone mapping|
WO2017037971A1|2015-09-01|2017-03-09|パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ|Conversion method and conversion apparatus|
EP3352467A4|2015-09-18|2019-05-15|Sharp Kabushiki Kaisha|Reception device, reception method, and program|
CN108141508B|2015-09-21|2021-02-26|杜比实验室特许公司|Imaging device and method for generating light in front of display panel of imaging device|
JP6825568B2|2015-09-25|2021-02-03|ソニー株式会社|Image processing device and image processing method|
JP6830190B2|2015-10-07|2021-02-17|パナソニックIpマネジメント株式会社|Video transmission method, video reception method, video transmission device and video reception device|
WO2017061071A1|2015-10-07|2017-04-13|パナソニックIpマネジメント株式会社|Video transmission method, video reception method, video transmission device, and video reception device|
US10140953B2|2015-10-22|2018-11-27|Dolby Laboratories Licensing Corporation|Ambient-light-corrected display management for high dynamic range images|
JP6872098B2|2015-11-12|2021-05-19|ソニーグループ株式会社|Information processing equipment, information recording media, information processing methods, and programs|
EP3169071B1|2015-11-16|2020-01-29|InterDigital VC Holdings, Inc.|Backward-compatible encoding of a hdr picture|
CN105336290B|2015-11-18|2018-06-01|青岛海信电器股份有限公司|Gamma correction method and device|
JP6831389B2|2015-11-24|2021-02-17|Koninklijke Philips N.V.|Processing of multiple HDR image sources|
AU2015261734A1|2015-11-30|2017-06-15|Canon Kabushiki Kaisha|Method, apparatus and system for encoding and decoding video data according to local luminance intensity|
EP3367657B1|2015-12-23|2021-06-02|Huawei Technologies Co., Ltd.|Image signal conversion method and apparatus, and terminal device|
US10237525B2|2015-12-25|2019-03-19|Sharp Kabushiki Kaisha|Display device, method for controlling display device, control program, and recording medium|
US9984446B2|2015-12-26|2018-05-29|Intel Corporation|Video tone mapping for converting high dynamic range content to standard dynamic range content|
JP6710970B2|2015-12-28|2020-06-17|ソニー株式会社|Transmission device and transmission method|
JP6237797B2|2016-01-05|2017-11-29|ソニー株式会社|Video system, video processing method, program, and video converter|
KR20170088303A|2016-01-22|2017-08-01|한국전자통신연구원|Method and apparatus for image signal conversion to reduce artifacts|
US10679544B2|2016-01-29|2020-06-09|Barco Nv|Digital image processing chain and processing blocks and a display including the same|
WO2017138470A1|2016-02-09|2017-08-17|ソニー株式会社|Transmission device, transmission method, reception device and reception method|
US10264196B2|2016-02-12|2019-04-16|Contrast, Inc.|Systems and methods for HDR video capture with a mobile device|
US10257394B2|2016-02-12|2019-04-09|Contrast, Inc.|Combined HDR/LDR video streaming|
KR20180109845A|2016-02-12|2018-10-08|삼성전자주식회사|Display device and display method thereof|
JP2017151308A|2016-02-25|2017-08-31|キヤノン株式会社|Information processor and information processing method|
CN108701471A|2016-02-26|2018-10-23|索尼公司|Information processing apparatus, display device, information recording medium, information processing method and program|
JP6824055B2|2016-03-02|2021-02-03|シャープ株式会社|Receiver and broadcast system|
JP6451669B2|2016-03-04|2019-01-16|ソニー株式会社|Evaluation apparatus, evaluation method, and camera system|
JP6161222B1|2016-03-17|2017-07-12|シャープ株式会社|Receiving apparatus and broadcasting system|
RU2728516C2|2016-03-18|2020-07-30|Конинклейке Филипс Н.В.|Hdr video encoding and decoding|
EP3220645A1|2016-03-18|2017-09-20|Thomson Licensing|A method and a device for encoding a high dynamic range picture, corresponding decoding method and decoding device|
JP6757157B2|2016-03-29|2020-09-16|キヤノン株式会社|Projection device and its control method|
JP6830244B2|2016-03-29|2021-02-17|パナソニックIpマネジメント株式会社|Display device and its control method|
US20170289571A1|2016-04-01|2017-10-05|Intel Corporation|Temporal control for spatially adaptive tone mapping of high dynamic range video|
GB2549521A|2016-04-21|2017-10-25|British Broadcasting Corp|Method and apparatus for conversion of dynamic range of video signals|
JP6755701B2|2016-04-25|2020-09-16|キヤノン株式会社|Imaging device, display device and image processing device|
EP3244616A1|2016-05-13|2017-11-15|Thomson Licensing|A method for encoding an input video comprising a luma component and two chroma components, the method comprising reshaping of said input video based on reshaping functions|
KR20170129004A|2016-05-16|2017-11-24|엘지전자 주식회사|Image processing device and image processing method thereof|
CN105979192A|2016-05-16|2016-09-28|福州瑞芯微电子股份有限公司|Video display method and device|
US10915999B2|2016-05-25|2021-02-09|Sony Corporation|Image processing apparatus, image processing method, and program|
US10692465B2|2016-05-27|2020-06-23|Dolby Laboratories Licensing Corporation|Transitioning between video priority and graphics priority|
US20170353704A1|2016-06-01|2017-12-07|Apple Inc.|Environment-Aware Supervised HDR Tone Mapping|
US10032263B2|2016-06-12|2018-07-24|Apple Inc.|Rendering information into images|
JP6729055B2|2016-06-23|2020-07-22|セイコーエプソン株式会社|Video processing device, display device, and video processing method|
WO2018000126A1|2016-06-27|2018-01-04|Intel Corporation|Method and system of multi-dynamic range multi-layer video blending with alpha channel sideband for video playback|
US9916638B2|2016-07-20|2018-03-13|Dolby Laboratories Licensing Corporation|Transformation of dynamic metadata to support alternate tone rendering|
AU2017308749A1|2016-08-09|2019-02-21|Contrast, Inc.|Real-time HDR video for vehicle control|
WO2018035696A1|2016-08-22|2018-03-01|华为技术有限公司|Image processing method and device|
TWI631505B|2016-08-26|2018-08-01|晨星半導體股份有限公司|Image processing method applied to a display and associated circuit|
JP6276815B1|2016-09-02|2018-02-07|シャープ株式会社|Image processing apparatus, television receiver, image processing system, image processing program, recording medium, and video providing apparatus|
WO2018047753A1|2016-09-09|2018-03-15|パナソニックIpマネジメント株式会社|Display device and signal processing method|
JP2018046556A|2016-09-09|2018-03-22|パナソニックIpマネジメント株式会社|Display device and signal processing method|
JP6751233B2|2016-09-12|2020-09-02|オンキヨー株式会社|Video processor|
KR20180032750A|2016-09-22|2018-04-02|삼성디스플레이 주식회사|Method of processing image and display apparatus performing the same|
US10567727B2|2016-09-29|2020-02-18|Panasonic Intellectual Property Management Co., Ltd.|Reproduction method, creation method, reproduction device, creation device, and recording medium|
GB2554669A|2016-09-30|2018-04-11|Apical Ltd|Image processing|
WO2018070822A1|2016-10-14|2018-04-19|엘지전자 주식회사|Data processing method and device for adaptive image playing|
US10218952B2|2016-11-28|2019-02-26|Microsoft Technology Licensing, Llc|Architecture for rendering high dynamic range video on enhanced dynamic range display devices|
KR20190104330A|2017-01-16|2019-09-09|소니 주식회사|Video processing apparatus, video processing method and program|
US10176561B2|2017-01-27|2019-01-08|Microsoft Technology Licensing, Llc|Content-adaptive adjustments to tone mapping operations for high dynamic range content|
US10104334B2|2017-01-27|2018-10-16|Microsoft Technology Licensing, Llc|Content-adaptive adjustment of display device brightness levels when rendering high dynamic range content|
WO2018147196A1|2017-02-09|2018-08-16|シャープ株式会社|Display device, television receiver, video processing method, backlight control method, reception device, video signal generation device, transmission device, video signal transmission system, reception method, program, control program, and recording medium|
JP6466488B2|2017-02-09|2019-02-06|シャープ株式会社|Display device, television receiver, video processing method, control program, and recording medium|
CN106713907B|2017-02-21|2018-08-03|京东方科技集团股份有限公司|HDR image display performance evaluation method and device for a display|
JP2018191269A|2017-02-24|2018-11-29|Thomson Licensing|Method and device of reconstructing image data from decoded image data|
JP6381704B1|2017-02-28|2018-08-29|シャープ株式会社|Video signal generating device, receiving device, television receiver, transmission / reception system, control program, and recording medium|
KR102308192B1|2017-03-09|2021-10-05|삼성전자주식회사|Display apparatus and control method thereof|
CN110447051A|2017-03-20|2019-11-12|杜比实验室特许公司|Perceptually preserving scene-referred contrasts and chromaticities|
JP2017143546A|2017-03-21|2017-08-17|ソニー株式会社|Playback device, recording medium, display device and information processing method|
CN109644289B|2017-04-21|2021-11-09|松下知识产权经营株式会社|Reproduction device, reproduction method, display device, and display method|
EP3399497A1|2017-05-05|2018-11-07|Koninklijke Philips N.V.|Optimizing decoded high dynamic range image saturation|
US10403214B2|2017-05-12|2019-09-03|Apple Inc.|Electronic devices with tone mapping to accommodate simultaneous display of standard dynamic range and high dynamic range content|
WO2018235337A1|2017-06-21|2018-12-27|パナソニックIpマネジメント株式会社|Image display system and image display method|
EP3418972A1|2017-06-23|2018-12-26|Thomson Licensing|Method for tone adapting an image to a target peak luminance lt of a target display device|
KR20200024275A|2017-07-07|2020-03-06|가부시키가이샤 한도오따이 에네루기 켄큐쇼|Display system and operation method of display system|
WO2019008819A1|2017-07-07|2019-01-10|パナソニックIpマネジメント株式会社|Image display device and image display method|
WO2019014057A1|2017-07-10|2019-01-17|Contrast, Inc.|Stereoscopic camera|
US10873684B2|2017-07-14|2020-12-22|Panasonic Intellectual Property Management Co., Ltd.|Video display apparatus and video display method|
US10504263B2|2017-08-01|2019-12-10|Samsung Electronics Co., Ltd.|Adaptive high dynamic range tone mapping with overlay indication|
EP3684061B1|2017-09-13|2021-07-07|Panasonic Intellectual Property Management Co., Ltd.|Video display device and video display method|
EP3688978B1|2017-09-28|2021-07-07|Dolby Laboratories Licensing Corporation|Frame-rate-conversion metadata|
JPWO2019069483A1|2017-10-06|2020-09-17|パナソニックIpマネジメント株式会社|Video display device and video display method|
EP3470976A1|2017-10-12|2019-04-17|Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.|Method and apparatus for efficient delivery and usage of audio messages for high quality of experience|
JP6753386B2|2017-10-25|2020-09-09|ソニー株式会社|Camera system, video processing methods and programs|
KR20190056752A|2017-11-17|2019-05-27|삼성전자주식회사|Display apparatus, method for controlling the same and set top box|
JP6821269B2|2017-12-05|2021-01-27|株式会社ソニー・インタラクティブエンタテインメント|Image processing device and image processing method|
US10972674B2|2017-12-27|2021-04-06|Canon Kabushiki Kaisha|Electronic apparatus|
US11270661B2|2017-12-27|2022-03-08|Panasonic Intellectual Property Management Co., Ltd.|Display apparatus and display method|
JP2019149760A|2018-02-28|2019-09-05|セイコーエプソン株式会社|Circuit device and electronic apparatus|
RU2675281C1|2018-03-28|2018-12-18|федеральное государственное бюджетное образовательное учреждение высшего образования "Национальный исследовательский университет "МЭИ" |Method of identification of linear dynamic system|
CN108681991A|2018-04-04|2018-10-19|上海交通大学|High dynamic range inverse tone mapping method and system based on generative adversarial network|
KR20190118336A|2018-04-10|2019-10-18|엘지전자 주식회사|A multimedia device for processing video signal and a method thereof|
JP6652153B2|2018-04-26|2020-02-19|ソニー株式会社|Transmitting device, transmitting method, receiving device and receiving method|
TW201946440A|2018-04-30|2019-12-01|圓剛科技股份有限公司|Video signal conversion device|
US10951888B2|2018-06-04|2021-03-16|Contrast, Inc.|Compressed high dynamic range video|
CN110620935A|2018-06-19|2019-12-27|杭州海康慧影科技有限公司|Image processing method and device|
JP2020024550A|2018-08-07|2020-02-13|キヤノン株式会社|Image processing system, image processing method, and program|
WO2020033573A1|2018-08-10|2020-02-13|Dolby Laboratories Licensing Corporation|Reducing banding artifacts in hdr imaging via adaptive sdr-to-hdr reshaping functions|
CN108986053B|2018-08-21|2021-03-16|北京小米移动软件有限公司|Screen display method and device|
CN111063319B|2018-10-16|2021-05-18|深圳Tcl新技术有限公司|Image dynamic enhancement method and device based on backlight adjustment and computer equipment|
US10957024B2|2018-10-30|2021-03-23|Microsoft Technology Licensing, Llc|Real time tone mapping of high dynamic range image data at time of playback on a lower dynamic range display|
CN109243383B|2018-11-09|2021-12-17|珠海格力电器股份有限公司|Backlight brightness adjusting method of display screen and display screen device|
RU190476U1|2018-11-19|2019-07-02|Федеральное государственное автономное образовательное учреждение высшего образования "Севастопольский государственный университет"|DEVICE FOR TRANSFORMING BRIGHTNESS OF DIGITAL TELEVISION IMAGE|
US20200192548A1|2018-12-13|2020-06-18|Ati Technologies Ulc|Methods and apparatus for displaying a cursor on a high dynamic range display device|
EP3672254A1|2018-12-21|2020-06-24|InterDigital VC Holdings, Inc.|Decoding an image|
KR20200090346A|2019-01-21|2020-07-29|엘지전자 주식회사|Camera device, and electronic apparatus including the same|
CN110062160B|2019-04-09|2021-07-02|Oppo广东移动通信有限公司|Image processing method and device|
US20200327864A1|2019-04-10|2020-10-15|Mediatek Inc.|Video processing system for performing artificial intelligence assisted picture quality enhancement and associated video processing method|
BE1027295B1|2019-06-07|2021-02-01|Stereyo|ACOUSTIC STUDIO SCREEN|
JP6756396B2|2019-08-30|2020-09-16|ソニー株式会社|Playback device, display device, information processing method|
JP6904470B2|2019-08-30|2021-07-14|ソニーグループ株式会社|Playback device, display device, information processing method|
US11270662B2|2020-01-21|2022-03-08|Synaptics Incorporated|Device and method for brightness control of display device based on display brightness value encoding parameters beyond brightness|
Legal status:
2018-12-11| B06F| Objections, documents and/or translations needed after an examination request according [chapter 6.6 patent gazette]|
2019-11-12| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
2021-06-15| B09A| Decision: intention to grant [chapter 9.1 patent gazette]|
2021-08-31| B16A| Patent or certificate of addition of invention granted [chapter 16.1 patent gazette]|Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 09/20/2012, SUBJECT TO THE LEGAL CONDITIONS. |
Priority:
Application number | Filing date | Patent title
EP11182922.2|2011-09-27|
EP11182922|2011-09-27|
US201261588719P|2012-01-20|
US61/588,719|2012-01-20|
EP12160557.0|2012-03-21|
EP12160557|2012-03-21|
PCT/IB2012/054985|WO2013046096A1|2011-09-27|2012-09-20|Apparatus and method for dynamic range transforming of images|